00:00:00.001 Started by upstream project "autotest-per-patch" build number 132685 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:20.107 The recommended git tool is: git 00:00:20.108 using credential 00000000-0000-0000-0000-000000000002 00:00:20.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:20.123 Fetching changes from the remote Git repository 00:00:20.130 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:20.143 Using shallow fetch with depth 1 00:00:20.143 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:20.143 > git --version # timeout=10 00:00:20.154 > git --version # 'git version 2.39.2' 00:00:20.154 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:20.166 Setting http proxy: proxy-dmz.intel.com:911 00:00:20.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:29.450 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:29.464 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:29.477 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:29.477 > git config core.sparsecheckout # timeout=10 00:00:29.489 > git read-tree -mu HEAD # timeout=10 00:00:29.508 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:29.538 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:29.538 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:29.696 [Pipeline] Start of Pipeline 00:00:29.710 [Pipeline] library 00:00:29.712 Loading library shm_lib@master 00:00:29.712 Library shm_lib@master is cached. Copying from home. 00:00:29.728 [Pipeline] node 00:01:04.139 Still waiting to schedule task 00:01:04.140 Waiting for next available executor on ‘vagrant-vm-host’ 00:16:32.988 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:16:32.990 [Pipeline] { 00:16:33.003 [Pipeline] catchError 00:16:33.005 [Pipeline] { 00:16:33.019 [Pipeline] wrap 00:16:33.031 [Pipeline] { 00:16:33.040 [Pipeline] stage 00:16:33.042 [Pipeline] { (Prologue) 00:16:33.064 [Pipeline] echo 00:16:33.065 Node: VM-host-SM4 00:16:33.073 [Pipeline] cleanWs 00:16:33.084 [WS-CLEANUP] Deleting project workspace... 00:16:33.084 [WS-CLEANUP] Deferred wipeout is used... 
00:16:33.090 [WS-CLEANUP] done 00:16:33.288 [Pipeline] setCustomBuildProperty 00:16:33.381 [Pipeline] httpRequest 00:16:33.700 [Pipeline] echo 00:16:33.702 Sorcerer 10.211.164.20 is alive 00:16:33.713 [Pipeline] retry 00:16:33.715 [Pipeline] { 00:16:33.733 [Pipeline] httpRequest 00:16:33.738 HttpMethod: GET 00:16:33.738 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:33.739 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:33.740 Response Code: HTTP/1.1 200 OK 00:16:33.741 Success: Status code 200 is in the accepted range: 200,404 00:16:33.741 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:33.887 [Pipeline] } 00:16:33.906 [Pipeline] // retry 00:16:33.914 [Pipeline] sh 00:16:34.199 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:34.217 [Pipeline] httpRequest 00:16:34.519 [Pipeline] echo 00:16:34.522 Sorcerer 10.211.164.20 is alive 00:16:34.535 [Pipeline] retry 00:16:34.538 [Pipeline] { 00:16:34.556 [Pipeline] httpRequest 00:16:34.561 HttpMethod: GET 00:16:34.562 URL: http://10.211.164.20/packages/spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:16:34.564 Sending request to url: http://10.211.164.20/packages/spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:16:34.566 Response Code: HTTP/1.1 200 OK 00:16:34.566 Success: Status code 200 is in the accepted range: 200,404 00:16:34.568 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:16:36.838 [Pipeline] } 00:16:36.860 [Pipeline] // retry 00:16:36.867 [Pipeline] sh 00:16:37.251 + tar --no-same-owner -xf spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:16:40.638 [Pipeline] sh 00:16:40.914 + git -C spdk log --oneline -n5 00:16:40.914 688351e0e test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:16:40.914 2826724c4 test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:16:40.914 94ae61614 test/nvmf: Prepare replacements for the network setup 00:16:40.914 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:16:40.914 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:16:40.931 [Pipeline] writeFile 00:16:40.946 [Pipeline] sh 00:16:41.224 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:16:41.236 [Pipeline] sh 00:16:41.517 + cat autorun-spdk.conf 00:16:41.517 SPDK_RUN_FUNCTIONAL_TEST=1 00:16:41.517 SPDK_TEST_NVMF=1 00:16:41.517 SPDK_TEST_NVMF_TRANSPORT=tcp 00:16:41.517 SPDK_TEST_USDT=1 00:16:41.517 SPDK_TEST_NVMF_MDNS=1 00:16:41.517 SPDK_RUN_UBSAN=1 00:16:41.517 NET_TYPE=virt 00:16:41.517 SPDK_JSONRPC_GO_CLIENT=1 00:16:41.517 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:41.524 RUN_NIGHTLY=0 00:16:41.526 [Pipeline] } 00:16:41.541 [Pipeline] // stage 00:16:41.557 [Pipeline] stage 00:16:41.559 [Pipeline] { (Run VM) 00:16:41.572 [Pipeline] sh 00:16:41.853 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:16:41.854 + echo 'Start stage prepare_nvme.sh' 00:16:41.854 Start stage prepare_nvme.sh 00:16:41.854 + [[ -n 9 ]] 00:16:41.854 + disk_prefix=ex9 00:16:41.854 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:16:41.854 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:16:41.854 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:16:41.854 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:16:41.854 ++ 
SPDK_TEST_NVMF=1 00:16:41.854 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:16:41.854 ++ SPDK_TEST_USDT=1 00:16:41.854 ++ SPDK_TEST_NVMF_MDNS=1 00:16:41.854 ++ SPDK_RUN_UBSAN=1 00:16:41.854 ++ NET_TYPE=virt 00:16:41.854 ++ SPDK_JSONRPC_GO_CLIENT=1 00:16:41.854 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:41.854 ++ RUN_NIGHTLY=0 00:16:41.854 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:16:41.854 + nvme_files=() 00:16:41.854 + declare -A nvme_files 00:16:41.854 + backend_dir=/var/lib/libvirt/images/backends 00:16:41.854 + nvme_files['nvme.img']=5G 00:16:41.854 + nvme_files['nvme-cmb.img']=5G 00:16:41.854 + nvme_files['nvme-multi0.img']=4G 00:16:41.854 + nvme_files['nvme-multi1.img']=4G 00:16:41.854 + nvme_files['nvme-multi2.img']=4G 00:16:41.854 + nvme_files['nvme-openstack.img']=8G 00:16:41.854 + nvme_files['nvme-zns.img']=5G 00:16:41.854 + (( SPDK_TEST_NVME_PMR == 1 )) 00:16:41.854 + (( SPDK_TEST_FTL == 1 )) 00:16:41.854 + (( SPDK_TEST_NVME_FDP == 1 )) 00:16:41.854 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:16:41.854 + for nvme in "${!nvme_files[@]}" 00:16:41.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:16:41.854 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:16:41.854 + for nvme in "${!nvme_files[@]}" 00:16:41.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:16:41.854 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:16:41.854 + for nvme in "${!nvme_files[@]}" 00:16:41.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:16:41.854 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:16:41.854 + for nvme in "${!nvme_files[@]}" 00:16:41.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:16:41.854 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:16:41.854 + for nvme in "${!nvme_files[@]}" 00:16:41.854 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:16:42.113 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:16:42.113 + for nvme in "${!nvme_files[@]}" 00:16:42.113 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:16:42.113 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:16:42.113 + for nvme in "${!nvme_files[@]}" 00:16:42.113 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:16:43.049 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:16:43.049 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:16:43.049 + echo 'End stage prepare_nvme.sh' 00:16:43.049 End stage prepare_nvme.sh 00:16:43.061 [Pipeline] sh 00:16:43.415 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:16:43.415 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -H -a -v -f fedora39 00:16:43.415 00:16:43.415 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:16:43.415 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:16:43.415 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:16:43.415 HELP=0 00:16:43.415 DRY_RUN=0 00:16:43.415 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img, 00:16:43.415 NVME_DISKS_TYPE=nvme,nvme, 00:16:43.415 NVME_AUTO_CREATE=0 00:16:43.415 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img, 00:16:43.415 NVME_CMB=,, 00:16:43.415 NVME_PMR=,, 00:16:43.415 NVME_ZNS=,, 00:16:43.415 NVME_MS=,, 00:16:43.415 NVME_FDP=,, 00:16:43.415 SPDK_VAGRANT_DISTRO=fedora39 00:16:43.415 SPDK_VAGRANT_VMCPU=10 00:16:43.415 SPDK_VAGRANT_VMRAM=12288 00:16:43.415 SPDK_VAGRANT_PROVIDER=libvirt 00:16:43.415 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:16:43.415 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:16:43.415 SPDK_OPENSTACK_NETWORK=0 00:16:43.415 VAGRANT_PACKAGE_BOX=0 00:16:43.415 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:16:43.415 FORCE_DISTRO=true 00:16:43.415 VAGRANT_BOX_VERSION= 00:16:43.415 EXTRA_VAGRANTFILES= 00:16:43.415 NIC_MODEL=e1000 00:16:43.415 00:16:43.415 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:16:43.415 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:16:47.602 Bringing machine 'default' up with 'libvirt' provider... 00:16:48.540 ==> default: Creating image (snapshot of base box volume). 00:16:48.540 ==> default: Creating domain with the following settings... 
00:16:48.540 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733396412_72bcd28186fcf9b3a0d3 00:16:48.540 ==> default: -- Domain type: kvm 00:16:48.540 ==> default: -- Cpus: 10 00:16:48.540 ==> default: -- Feature: acpi 00:16:48.540 ==> default: -- Feature: apic 00:16:48.540 ==> default: -- Feature: pae 00:16:48.540 ==> default: -- Memory: 12288M 00:16:48.540 ==> default: -- Memory Backing: hugepages: 00:16:48.540 ==> default: -- Management MAC: 00:16:48.540 ==> default: -- Loader: 00:16:48.540 ==> default: -- Nvram: 00:16:48.540 ==> default: -- Base box: spdk/fedora39 00:16:48.540 ==> default: -- Storage pool: default 00:16:48.540 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733396412_72bcd28186fcf9b3a0d3.img (20G) 00:16:48.540 ==> default: -- Volume Cache: default 00:16:48.540 ==> default: -- Kernel: 00:16:48.540 ==> default: -- Initrd: 00:16:48.540 ==> default: -- Graphics Type: vnc 00:16:48.540 ==> default: -- Graphics Port: -1 00:16:48.540 ==> default: -- Graphics IP: 127.0.0.1 00:16:48.540 ==> default: -- Graphics Password: Not defined 00:16:48.540 ==> default: -- Video Type: cirrus 00:16:48.540 ==> default: -- Video VRAM: 9216 00:16:48.540 ==> default: -- Sound Type: 00:16:48.540 ==> default: -- Keymap: en-us 00:16:48.540 ==> default: -- TPM Path: 00:16:48.540 ==> default: -- INPUT: type=mouse, bus=ps2 00:16:48.540 ==> default: -- Command line args: 00:16:48.540 ==> default: -> value=-device, 00:16:48.540 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:16:48.540 ==> default: -> value=-drive, 00:16:48.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0, 00:16:48.540 ==> default: -> value=-device, 00:16:48.540 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:48.540 ==> default: -> value=-device, 00:16:48.540 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:16:48.540 ==> default: -> value=-drive, 00:16:48.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:16:48.540 ==> default: -> value=-device, 00:16:48.540 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:48.540 ==> default: -> value=-drive, 00:16:48.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:16:48.540 ==> default: -> value=-device, 00:16:48.540 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:48.540 ==> default: -> value=-drive, 00:16:48.540 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:16:48.540 ==> default: -> value=-device, 00:16:48.540 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:48.799 ==> default: Creating shared folders metadata... 00:16:48.799 ==> default: Starting domain. 00:16:50.717 ==> default: Waiting for domain to get an IP address... 00:17:08.803 ==> default: Waiting for SSH to become available... 00:17:08.803 ==> default: Configuring and enabling network interfaces... 
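Note on the domain settings above: each "-device nvme" / "-device nvme-ns" pair attaches one emulated NVMe namespace per backing file. Controller nvme-0 (serial 12340) carries a single namespace backed by ex9-nvme.img, while nvme-1 (serial 12341) carries three namespaces (nsid 1-3) backed by the multi0/multi1/multi2 images created in the prepare_nvme.sh stage. A minimal hand-run equivalent for the single-namespace controller, as a sketch only (vagrant-libvirt drives qemu through generated domain XML rather than a command line like this, and the machine/memory flags below are illustrative assumptions, not taken from this log):

  # sketch: -machine and -m are illustrative; device/drive args mirror the log above
  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -machine q35,accel=kvm -m 1024 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

Inside the guest these surface as /dev/nvme0n1 (5G) plus /dev/nvme1n1..nvme1n3 (4G each), which matches the setup.sh status output later in this log.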
00:17:12.983 default: SSH address: 192.168.121.134:22 00:17:12.984 default: SSH username: vagrant 00:17:12.984 default: SSH auth method: private key 00:17:15.529 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:17:23.641 ==> default: Mounting SSHFS shared folder... 00:17:25.542 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:17:25.542 ==> default: Checking Mount.. 00:17:26.920 ==> default: Folder Successfully Mounted! 00:17:26.920 ==> default: Running provisioner: file... 00:17:27.486 default: ~/.gitconfig => .gitconfig 00:17:28.071 00:17:28.071 SUCCESS! 00:17:28.071 00:17:28.071 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:17:28.071 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:17:28.071 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:17:28.071 00:17:28.079 [Pipeline] } 00:17:28.091 [Pipeline] // stage 00:17:28.100 [Pipeline] dir 00:17:28.101 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:17:28.102 [Pipeline] { 00:17:28.111 [Pipeline] catchError 00:17:28.112 [Pipeline] { 00:17:28.123 [Pipeline] sh 00:17:28.401 + vagrant ssh-config --host vagrant 00:17:28.401 + sed+ -ne /^Host/,$p 00:17:28.401 tee ssh_conf 00:17:31.681 Host vagrant 00:17:31.681 HostName 192.168.121.134 00:17:31.681 User vagrant 00:17:31.681 Port 22 00:17:31.681 UserKnownHostsFile /dev/null 00:17:31.681 StrictHostKeyChecking no 00:17:31.681 PasswordAuthentication no 00:17:31.681 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:17:31.681 IdentitiesOnly yes 00:17:31.681 LogLevel FATAL 00:17:31.681 ForwardAgent yes 00:17:31.681 ForwardX11 yes 00:17:31.681 00:17:31.695 [Pipeline] withEnv 00:17:31.698 [Pipeline] { 00:17:31.713 [Pipeline] sh 00:17:31.993 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:17:31.993 source /etc/os-release 00:17:31.993 [[ -e /image.version ]] && img=$(< /image.version) 00:17:31.993 # Minimal, systemd-like check. 00:17:31.993 if [[ -e /.dockerenv ]]; then 00:17:31.993 # Clear garbage from the node's name: 00:17:31.993 # agt-er_autotest_547-896 -> autotest_547-896 00:17:31.993 # $HOSTNAME is the actual container id 00:17:31.993 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:17:31.993 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:17:31.993 # We can assume this is a mount from a host where container is running, 00:17:31.993 # so fetch its hostname to easily identify the target swarm worker. 
00:17:31.993 container="$(< /etc/hostname) ($agent)" 00:17:31.993 else 00:17:31.993 # Fallback 00:17:31.993 container=$agent 00:17:31.993 fi 00:17:31.993 fi 00:17:31.993 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:17:31.993 00:17:32.262 [Pipeline] } 00:17:32.278 [Pipeline] // withEnv 00:17:32.287 [Pipeline] setCustomBuildProperty 00:17:32.302 [Pipeline] stage 00:17:32.305 [Pipeline] { (Tests) 00:17:32.324 [Pipeline] sh 00:17:32.602 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:17:32.875 [Pipeline] sh 00:17:33.154 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:17:33.425 [Pipeline] timeout 00:17:33.425 Timeout set to expire in 1 hr 0 min 00:17:33.427 [Pipeline] { 00:17:33.442 [Pipeline] sh 00:17:33.719 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:17:34.299 HEAD is now at 688351e0e test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:17:34.311 [Pipeline] sh 00:17:34.592 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:17:34.866 [Pipeline] sh 00:17:35.149 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:17:35.427 [Pipeline] sh 00:17:35.709 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:17:35.967 ++ readlink -f spdk_repo 00:17:35.967 + DIR_ROOT=/home/vagrant/spdk_repo 00:17:35.967 + [[ -n /home/vagrant/spdk_repo ]] 00:17:35.967 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:17:35.967 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:17:35.967 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:17:35.967 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:17:35.967 + [[ -d /home/vagrant/spdk_repo/output ]] 00:17:35.967 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:17:35.967 + cd /home/vagrant/spdk_repo 00:17:35.967 + source /etc/os-release 00:17:35.967 ++ NAME='Fedora Linux' 00:17:35.967 ++ VERSION='39 (Cloud Edition)' 00:17:35.967 ++ ID=fedora 00:17:35.967 ++ VERSION_ID=39 00:17:35.967 ++ VERSION_CODENAME= 00:17:35.967 ++ PLATFORM_ID=platform:f39 00:17:35.967 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:17:35.967 ++ ANSI_COLOR='0;38;2;60;110;180' 00:17:35.967 ++ LOGO=fedora-logo-icon 00:17:35.967 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:17:35.967 ++ HOME_URL=https://fedoraproject.org/ 00:17:35.967 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:17:35.967 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:17:35.967 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:17:35.967 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:17:35.967 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:17:35.967 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:17:35.967 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:17:35.967 ++ SUPPORT_END=2024-11-12 00:17:35.967 ++ VARIANT='Cloud Edition' 00:17:35.967 ++ VARIANT_ID=cloud 00:17:35.967 + uname -a 00:17:35.967 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:17:35.967 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:36.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.532 Hugepages 00:17:36.532 node hugesize free / total 00:17:36.532 node0 1048576kB 0 / 0 00:17:36.532 node0 2048kB 0 / 0 00:17:36.532 00:17:36.532 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:36.532 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:36.532 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:36.532 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:36.532 + rm -f /tmp/spdk-ld-path 00:17:36.532 + source autorun-spdk.conf 00:17:36.532 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:36.532 ++ SPDK_TEST_NVMF=1 00:17:36.532 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:17:36.532 ++ SPDK_TEST_USDT=1 00:17:36.532 ++ SPDK_TEST_NVMF_MDNS=1 00:17:36.532 ++ SPDK_RUN_UBSAN=1 00:17:36.532 ++ NET_TYPE=virt 00:17:36.532 ++ SPDK_JSONRPC_GO_CLIENT=1 00:17:36.532 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:36.532 ++ RUN_NIGHTLY=0 00:17:36.532 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:17:36.532 + [[ -n '' ]] 00:17:36.532 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:17:36.532 + for M in /var/spdk/build-*-manifest.txt 00:17:36.532 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:17:36.532 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:36.532 + for M in /var/spdk/build-*-manifest.txt 00:17:36.532 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:17:36.532 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:36.532 + for M in /var/spdk/build-*-manifest.txt 00:17:36.532 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:17:36.532 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:36.532 ++ uname 00:17:36.532 + [[ Linux == \L\i\n\u\x ]] 00:17:36.532 + sudo dmesg -T 00:17:36.532 + sudo dmesg --clear 00:17:36.532 + dmesg_pid=5254 00:17:36.532 + [[ Fedora Linux == FreeBSD ]] 00:17:36.532 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:36.532 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:36.532 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:17:36.532 + [[ -x /usr/src/fio-static/fio ]] 00:17:36.532 + sudo dmesg -Tw 00:17:36.532 + export FIO_BIN=/usr/src/fio-static/fio 00:17:36.532 + FIO_BIN=/usr/src/fio-static/fio 00:17:36.532 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:17:36.532 + [[ ! -v VFIO_QEMU_BIN ]] 00:17:36.532 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:17:36.532 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:36.532 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:36.532 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:17:36.532 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:36.532 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:36.532 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:36.790 11:01:01 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:17:36.790 11:01:01 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:17:36.790 11:01:01 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:36.791 11:01:01 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:17:36.791 11:01:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:17:36.791 11:01:01 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:36.791 11:01:01 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:17:36.791 11:01:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.791 11:01:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:17:36.791 11:01:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:36.791 11:01:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.791 11:01:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.791 11:01:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.791 11:01:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.791 11:01:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.791 11:01:01 -- paths/export.sh@5 -- $ export PATH 00:17:36.791 11:01:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.791 11:01:01 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:36.791 11:01:01 -- common/autobuild_common.sh@493 -- $ date +%s 00:17:36.791 11:01:01 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733396461.XXXXXX 00:17:36.791 11:01:01 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733396461.wdRSUH 00:17:36.791 11:01:01 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:17:36.791 11:01:01 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:17:36.791 11:01:01 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:17:36.791 11:01:01 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:36.791 11:01:01 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:36.791 11:01:01 -- common/autobuild_common.sh@509 -- $ get_config_params 00:17:36.791 11:01:01 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:17:36.791 11:01:01 -- common/autotest_common.sh@10 -- $ set +x 00:17:36.791 11:01:01 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:17:36.791 11:01:01 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:17:36.791 11:01:01 -- pm/common@17 -- $ local monitor 00:17:36.791 11:01:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:36.791 11:01:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:36.791 11:01:01 -- pm/common@21 -- $ date +%s 00:17:36.791 11:01:01 -- pm/common@25 -- $ sleep 1 00:17:36.791 11:01:01 -- pm/common@21 -- $ date +%s 00:17:36.791 11:01:01 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733396461 00:17:36.791 11:01:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733396461 00:17:36.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733396461_collect-vmstat.pm.log 00:17:36.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733396461_collect-cpu-load.pm.log 00:17:37.727 11:01:02 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:17:37.727 11:01:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:17:37.727 11:01:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:17:37.727 11:01:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:17:37.727 11:01:02 -- spdk/autobuild.sh@16 -- $ date -u 00:17:37.727 Thu Dec 5 11:01:02 AM UTC 2024 00:17:37.727 11:01:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:17:37.727 v25.01-pre-299-g688351e0e 00:17:37.727 11:01:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:17:37.727 11:01:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:17:37.727 11:01:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:17:37.727 11:01:02 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:17:37.727 11:01:02 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:17:37.727 11:01:02 -- common/autotest_common.sh@10 -- $ set +x 00:17:37.727 ************************************ 00:17:37.727 START TEST ubsan 00:17:37.727 ************************************ 00:17:37.727 using ubsan 00:17:37.727 11:01:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:17:37.727 00:17:37.727 real 0m0.000s 00:17:37.727 user 0m0.000s 00:17:37.727 sys 0m0.000s 00:17:37.727 11:01:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:37.727 11:01:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:17:37.727 ************************************ 00:17:37.727 END TEST ubsan 00:17:37.727 ************************************ 00:17:37.987 11:01:02 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:17:37.987 11:01:02 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:17:37.987 11:01:02 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:17:37.987 11:01:02 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:17:37.987 11:01:02 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:17:37.987 11:01:02 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:17:37.987 11:01:02 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:17:37.987 11:01:02 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:17:37.987 11:01:02 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:17:37.987 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:37.987 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:38.553 Using 'verbs' RDMA provider 00:17:54.365 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:18:06.560 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:18:06.560 go version go1.21.1 linux/amd64 00:18:06.560 Creating mk/config.mk...done. 00:18:06.560 Creating mk/cc.flags.mk...done. 
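Note on the autobuild step above: the `git describe --tags` string v25.01-pre-299-g688351e0e decodes as 299 commits past the v25.01-pre tag, at commit 688351e0e, i.e. the revision fetched at the start of this job. The same configure-and-build sequence can be reproduced outside the CI wrappers; a minimal sketch, assuming a checkout of that commit with submodules initialized and fio sources at /usr/src/fio (both prepared by the VM image here, not by these commands):

  # flags copied verbatim from the configure invocation logged above
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
      --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
      --disable-unit-tests --enable-ubsan --enable-coverage \
      --with-ublk --with-avahi --with-golang --with-shared
  make -j10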
00:18:06.560 Type 'make' to build. 00:18:06.560 11:01:31 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:18:06.560 11:01:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:18:06.560 11:01:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:18:06.560 11:01:31 -- common/autotest_common.sh@10 -- $ set +x 00:18:06.560 ************************************ 00:18:06.560 START TEST make 00:18:06.560 ************************************ 00:18:06.560 11:01:31 make -- common/autotest_common.sh@1129 -- $ make -j10 00:18:07.127 make[1]: Nothing to be done for 'all'. 00:18:25.196 The Meson build system 00:18:25.196 Version: 1.5.0 00:18:25.196 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:18:25.196 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:18:25.196 Build type: native build 00:18:25.196 Program cat found: YES (/usr/bin/cat) 00:18:25.196 Project name: DPDK 00:18:25.196 Project version: 24.03.0 00:18:25.196 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:18:25.196 C linker for the host machine: cc ld.bfd 2.40-14 00:18:25.196 Host machine cpu family: x86_64 00:18:25.196 Host machine cpu: x86_64 00:18:25.196 Message: ## Building in Developer Mode ## 00:18:25.196 Program pkg-config found: YES (/usr/bin/pkg-config) 00:18:25.196 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:18:25.196 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:18:25.196 Program python3 found: YES (/usr/bin/python3) 00:18:25.196 Program cat found: YES (/usr/bin/cat) 00:18:25.196 Compiler for C supports arguments -march=native: YES 00:18:25.196 Checking for size of "void *" : 8 00:18:25.196 Checking for size of "void *" : 8 (cached) 00:18:25.196 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:18:25.196 Library m found: YES 00:18:25.196 Library numa found: YES 00:18:25.197 Has header "numaif.h" : YES 00:18:25.197 Library fdt found: NO 00:18:25.197 Library execinfo found: NO 00:18:25.197 Has header "execinfo.h" : YES 00:18:25.197 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:18:25.197 Run-time dependency libarchive found: NO (tried pkgconfig) 00:18:25.197 Run-time dependency libbsd found: NO (tried pkgconfig) 00:18:25.197 Run-time dependency jansson found: NO (tried pkgconfig) 00:18:25.197 Run-time dependency openssl found: YES 3.1.1 00:18:25.197 Run-time dependency libpcap found: YES 1.10.4 00:18:25.197 Has header "pcap.h" with dependency libpcap: YES 00:18:25.197 Compiler for C supports arguments -Wcast-qual: YES 00:18:25.197 Compiler for C supports arguments -Wdeprecated: YES 00:18:25.197 Compiler for C supports arguments -Wformat: YES 00:18:25.197 Compiler for C supports arguments -Wformat-nonliteral: NO 00:18:25.197 Compiler for C supports arguments -Wformat-security: NO 00:18:25.197 Compiler for C supports arguments -Wmissing-declarations: YES 00:18:25.197 Compiler for C supports arguments -Wmissing-prototypes: YES 00:18:25.197 Compiler for C supports arguments -Wnested-externs: YES 00:18:25.197 Compiler for C supports arguments -Wold-style-definition: YES 00:18:25.197 Compiler for C supports arguments -Wpointer-arith: YES 00:18:25.197 Compiler for C supports arguments -Wsign-compare: YES 00:18:25.197 Compiler for C supports arguments -Wstrict-prototypes: YES 00:18:25.197 Compiler for C supports arguments -Wundef: YES 00:18:25.197 Compiler for C supports arguments -Wwrite-strings: YES 
00:18:25.197 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:18:25.197 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:18:25.197 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:18:25.197 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:18:25.197 Program objdump found: YES (/usr/bin/objdump) 00:18:25.197 Compiler for C supports arguments -mavx512f: YES 00:18:25.197 Checking if "AVX512 checking" compiles: YES 00:18:25.197 Fetching value of define "__SSE4_2__" : 1 00:18:25.197 Fetching value of define "__AES__" : 1 00:18:25.197 Fetching value of define "__AVX__" : 1 00:18:25.197 Fetching value of define "__AVX2__" : 1 00:18:25.197 Fetching value of define "__AVX512BW__" : 1 00:18:25.197 Fetching value of define "__AVX512CD__" : 1 00:18:25.197 Fetching value of define "__AVX512DQ__" : 1 00:18:25.197 Fetching value of define "__AVX512F__" : 1 00:18:25.197 Fetching value of define "__AVX512VL__" : 1 00:18:25.197 Fetching value of define "__PCLMUL__" : 1 00:18:25.197 Fetching value of define "__RDRND__" : 1 00:18:25.197 Fetching value of define "__RDSEED__" : 1 00:18:25.197 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:18:25.197 Fetching value of define "__znver1__" : (undefined) 00:18:25.197 Fetching value of define "__znver2__" : (undefined) 00:18:25.197 Fetching value of define "__znver3__" : (undefined) 00:18:25.197 Fetching value of define "__znver4__" : (undefined) 00:18:25.197 Compiler for C supports arguments -Wno-format-truncation: YES 00:18:25.197 Message: lib/log: Defining dependency "log" 00:18:25.197 Message: lib/kvargs: Defining dependency "kvargs" 00:18:25.197 Message: lib/telemetry: Defining dependency "telemetry" 00:18:25.197 Checking for function "getentropy" : NO 00:18:25.197 Message: lib/eal: Defining dependency "eal" 00:18:25.197 Message: lib/ring: Defining dependency "ring" 00:18:25.197 Message: lib/rcu: Defining dependency "rcu" 00:18:25.197 Message: lib/mempool: Defining dependency "mempool" 00:18:25.197 Message: lib/mbuf: Defining dependency "mbuf" 00:18:25.197 Fetching value of define "__PCLMUL__" : 1 (cached) 00:18:25.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:18:25.197 Fetching value of define "__AVX512BW__" : 1 (cached) 00:18:25.197 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:18:25.197 Fetching value of define "__AVX512VL__" : 1 (cached) 00:18:25.197 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:18:25.197 Compiler for C supports arguments -mpclmul: YES 00:18:25.197 Compiler for C supports arguments -maes: YES 00:18:25.197 Compiler for C supports arguments -mavx512f: YES (cached) 00:18:25.197 Compiler for C supports arguments -mavx512bw: YES 00:18:25.197 Compiler for C supports arguments -mavx512dq: YES 00:18:25.197 Compiler for C supports arguments -mavx512vl: YES 00:18:25.197 Compiler for C supports arguments -mvpclmulqdq: YES 00:18:25.197 Compiler for C supports arguments -mavx2: YES 00:18:25.197 Compiler for C supports arguments -mavx: YES 00:18:25.197 Message: lib/net: Defining dependency "net" 00:18:25.197 Message: lib/meter: Defining dependency "meter" 00:18:25.197 Message: lib/ethdev: Defining dependency "ethdev" 00:18:25.197 Message: lib/pci: Defining dependency "pci" 00:18:25.197 Message: lib/cmdline: Defining dependency "cmdline" 00:18:25.197 Message: lib/hash: Defining dependency "hash" 00:18:25.197 Message: lib/timer: Defining dependency "timer" 00:18:25.197 Message: lib/compressdev: Defining dependency 
"compressdev" 00:18:25.197 Message: lib/cryptodev: Defining dependency "cryptodev" 00:18:25.197 Message: lib/dmadev: Defining dependency "dmadev" 00:18:25.197 Compiler for C supports arguments -Wno-cast-qual: YES 00:18:25.197 Message: lib/power: Defining dependency "power" 00:18:25.197 Message: lib/reorder: Defining dependency "reorder" 00:18:25.197 Message: lib/security: Defining dependency "security" 00:18:25.197 Has header "linux/userfaultfd.h" : YES 00:18:25.197 Has header "linux/vduse.h" : YES 00:18:25.197 Message: lib/vhost: Defining dependency "vhost" 00:18:25.197 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:18:25.197 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:18:25.197 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:18:25.197 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:18:25.197 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:18:25.197 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:18:25.197 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:18:25.197 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:18:25.197 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:18:25.197 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:18:25.197 Program doxygen found: YES (/usr/local/bin/doxygen) 00:18:25.197 Configuring doxy-api-html.conf using configuration 00:18:25.197 Configuring doxy-api-man.conf using configuration 00:18:25.197 Program mandb found: YES (/usr/bin/mandb) 00:18:25.197 Program sphinx-build found: NO 00:18:25.197 Configuring rte_build_config.h using configuration 00:18:25.197 Message: 00:18:25.197 ================= 00:18:25.197 Applications Enabled 00:18:25.197 ================= 00:18:25.197 00:18:25.197 apps: 00:18:25.197 00:18:25.197 00:18:25.197 Message: 00:18:25.197 ================= 00:18:25.197 Libraries Enabled 00:18:25.197 ================= 00:18:25.197 00:18:25.197 libs: 00:18:25.197 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:18:25.197 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:18:25.197 cryptodev, dmadev, power, reorder, security, vhost, 00:18:25.197 00:18:25.197 Message: 00:18:25.197 =============== 00:18:25.197 Drivers Enabled 00:18:25.197 =============== 00:18:25.197 00:18:25.197 common: 00:18:25.197 00:18:25.197 bus: 00:18:25.197 pci, vdev, 00:18:25.197 mempool: 00:18:25.197 ring, 00:18:25.197 dma: 00:18:25.197 00:18:25.197 net: 00:18:25.197 00:18:25.197 crypto: 00:18:25.197 00:18:25.197 compress: 00:18:25.197 00:18:25.197 vdpa: 00:18:25.197 00:18:25.197 00:18:25.197 Message: 00:18:25.197 ================= 00:18:25.197 Content Skipped 00:18:25.197 ================= 00:18:25.197 00:18:25.197 apps: 00:18:25.197 dumpcap: explicitly disabled via build config 00:18:25.197 graph: explicitly disabled via build config 00:18:25.197 pdump: explicitly disabled via build config 00:18:25.197 proc-info: explicitly disabled via build config 00:18:25.197 test-acl: explicitly disabled via build config 00:18:25.197 test-bbdev: explicitly disabled via build config 00:18:25.197 test-cmdline: explicitly disabled via build config 00:18:25.197 test-compress-perf: explicitly disabled via build config 00:18:25.197 test-crypto-perf: explicitly disabled via build config 00:18:25.197 test-dma-perf: explicitly disabled via build config 00:18:25.197 test-eventdev: explicitly disabled via build config 00:18:25.197 test-fib: 
explicitly disabled via build config 00:18:25.197 test-flow-perf: explicitly disabled via build config 00:18:25.197 test-gpudev: explicitly disabled via build config 00:18:25.197 test-mldev: explicitly disabled via build config 00:18:25.197 test-pipeline: explicitly disabled via build config 00:18:25.197 test-pmd: explicitly disabled via build config 00:18:25.197 test-regex: explicitly disabled via build config 00:18:25.197 test-sad: explicitly disabled via build config 00:18:25.197 test-security-perf: explicitly disabled via build config 00:18:25.197 00:18:25.197 libs: 00:18:25.197 argparse: explicitly disabled via build config 00:18:25.197 metrics: explicitly disabled via build config 00:18:25.197 acl: explicitly disabled via build config 00:18:25.197 bbdev: explicitly disabled via build config 00:18:25.197 bitratestats: explicitly disabled via build config 00:18:25.197 bpf: explicitly disabled via build config 00:18:25.197 cfgfile: explicitly disabled via build config 00:18:25.197 distributor: explicitly disabled via build config 00:18:25.197 efd: explicitly disabled via build config 00:18:25.197 eventdev: explicitly disabled via build config 00:18:25.197 dispatcher: explicitly disabled via build config 00:18:25.197 gpudev: explicitly disabled via build config 00:18:25.197 gro: explicitly disabled via build config 00:18:25.197 gso: explicitly disabled via build config 00:18:25.197 ip_frag: explicitly disabled via build config 00:18:25.197 jobstats: explicitly disabled via build config 00:18:25.197 latencystats: explicitly disabled via build config 00:18:25.197 lpm: explicitly disabled via build config 00:18:25.197 member: explicitly disabled via build config 00:18:25.197 pcapng: explicitly disabled via build config 00:18:25.198 rawdev: explicitly disabled via build config 00:18:25.198 regexdev: explicitly disabled via build config 00:18:25.198 mldev: explicitly disabled via build config 00:18:25.198 rib: explicitly disabled via build config 00:18:25.198 sched: explicitly disabled via build config 00:18:25.198 stack: explicitly disabled via build config 00:18:25.198 ipsec: explicitly disabled via build config 00:18:25.198 pdcp: explicitly disabled via build config 00:18:25.198 fib: explicitly disabled via build config 00:18:25.198 port: explicitly disabled via build config 00:18:25.198 pdump: explicitly disabled via build config 00:18:25.198 table: explicitly disabled via build config 00:18:25.198 pipeline: explicitly disabled via build config 00:18:25.198 graph: explicitly disabled via build config 00:18:25.198 node: explicitly disabled via build config 00:18:25.198 00:18:25.198 drivers: 00:18:25.198 common/cpt: not in enabled drivers build config 00:18:25.198 common/dpaax: not in enabled drivers build config 00:18:25.198 common/iavf: not in enabled drivers build config 00:18:25.198 common/idpf: not in enabled drivers build config 00:18:25.198 common/ionic: not in enabled drivers build config 00:18:25.198 common/mvep: not in enabled drivers build config 00:18:25.198 common/octeontx: not in enabled drivers build config 00:18:25.198 bus/auxiliary: not in enabled drivers build config 00:18:25.198 bus/cdx: not in enabled drivers build config 00:18:25.198 bus/dpaa: not in enabled drivers build config 00:18:25.198 bus/fslmc: not in enabled drivers build config 00:18:25.198 bus/ifpga: not in enabled drivers build config 00:18:25.198 bus/platform: not in enabled drivers build config 00:18:25.198 bus/uacce: not in enabled drivers build config 00:18:25.198 bus/vmbus: not in enabled drivers build 
config 00:18:25.198 common/cnxk: not in enabled drivers build config 00:18:25.198 common/mlx5: not in enabled drivers build config 00:18:25.198 common/nfp: not in enabled drivers build config 00:18:25.198 common/nitrox: not in enabled drivers build config 00:18:25.198 common/qat: not in enabled drivers build config 00:18:25.198 common/sfc_efx: not in enabled drivers build config 00:18:25.198 mempool/bucket: not in enabled drivers build config 00:18:25.198 mempool/cnxk: not in enabled drivers build config 00:18:25.198 mempool/dpaa: not in enabled drivers build config 00:18:25.198 mempool/dpaa2: not in enabled drivers build config 00:18:25.198 mempool/octeontx: not in enabled drivers build config 00:18:25.198 mempool/stack: not in enabled drivers build config 00:18:25.198 dma/cnxk: not in enabled drivers build config 00:18:25.198 dma/dpaa: not in enabled drivers build config 00:18:25.198 dma/dpaa2: not in enabled drivers build config 00:18:25.198 dma/hisilicon: not in enabled drivers build config 00:18:25.198 dma/idxd: not in enabled drivers build config 00:18:25.198 dma/ioat: not in enabled drivers build config 00:18:25.198 dma/skeleton: not in enabled drivers build config 00:18:25.198 net/af_packet: not in enabled drivers build config 00:18:25.198 net/af_xdp: not in enabled drivers build config 00:18:25.198 net/ark: not in enabled drivers build config 00:18:25.198 net/atlantic: not in enabled drivers build config 00:18:25.198 net/avp: not in enabled drivers build config 00:18:25.198 net/axgbe: not in enabled drivers build config 00:18:25.198 net/bnx2x: not in enabled drivers build config 00:18:25.198 net/bnxt: not in enabled drivers build config 00:18:25.198 net/bonding: not in enabled drivers build config 00:18:25.198 net/cnxk: not in enabled drivers build config 00:18:25.198 net/cpfl: not in enabled drivers build config 00:18:25.198 net/cxgbe: not in enabled drivers build config 00:18:25.198 net/dpaa: not in enabled drivers build config 00:18:25.198 net/dpaa2: not in enabled drivers build config 00:18:25.198 net/e1000: not in enabled drivers build config 00:18:25.198 net/ena: not in enabled drivers build config 00:18:25.198 net/enetc: not in enabled drivers build config 00:18:25.198 net/enetfec: not in enabled drivers build config 00:18:25.198 net/enic: not in enabled drivers build config 00:18:25.198 net/failsafe: not in enabled drivers build config 00:18:25.198 net/fm10k: not in enabled drivers build config 00:18:25.198 net/gve: not in enabled drivers build config 00:18:25.198 net/hinic: not in enabled drivers build config 00:18:25.198 net/hns3: not in enabled drivers build config 00:18:25.198 net/i40e: not in enabled drivers build config 00:18:25.198 net/iavf: not in enabled drivers build config 00:18:25.198 net/ice: not in enabled drivers build config 00:18:25.198 net/idpf: not in enabled drivers build config 00:18:25.198 net/igc: not in enabled drivers build config 00:18:25.198 net/ionic: not in enabled drivers build config 00:18:25.198 net/ipn3ke: not in enabled drivers build config 00:18:25.198 net/ixgbe: not in enabled drivers build config 00:18:25.198 net/mana: not in enabled drivers build config 00:18:25.198 net/memif: not in enabled drivers build config 00:18:25.198 net/mlx4: not in enabled drivers build config 00:18:25.198 net/mlx5: not in enabled drivers build config 00:18:25.198 net/mvneta: not in enabled drivers build config 00:18:25.198 net/mvpp2: not in enabled drivers build config 00:18:25.198 net/netvsc: not in enabled drivers build config 00:18:25.198 net/nfb: not in 
enabled drivers build config 00:18:25.198 net/nfp: not in enabled drivers build config 00:18:25.198 net/ngbe: not in enabled drivers build config 00:18:25.198 net/null: not in enabled drivers build config 00:18:25.198 net/octeontx: not in enabled drivers build config 00:18:25.198 net/octeon_ep: not in enabled drivers build config 00:18:25.198 net/pcap: not in enabled drivers build config 00:18:25.198 net/pfe: not in enabled drivers build config 00:18:25.198 net/qede: not in enabled drivers build config 00:18:25.198 net/ring: not in enabled drivers build config 00:18:25.198 net/sfc: not in enabled drivers build config 00:18:25.198 net/softnic: not in enabled drivers build config 00:18:25.198 net/tap: not in enabled drivers build config 00:18:25.198 net/thunderx: not in enabled drivers build config 00:18:25.198 net/txgbe: not in enabled drivers build config 00:18:25.198 net/vdev_netvsc: not in enabled drivers build config 00:18:25.198 net/vhost: not in enabled drivers build config 00:18:25.198 net/virtio: not in enabled drivers build config 00:18:25.198 net/vmxnet3: not in enabled drivers build config 00:18:25.198 raw/*: missing internal dependency, "rawdev" 00:18:25.198 crypto/armv8: not in enabled drivers build config 00:18:25.198 crypto/bcmfs: not in enabled drivers build config 00:18:25.198 crypto/caam_jr: not in enabled drivers build config 00:18:25.198 crypto/ccp: not in enabled drivers build config 00:18:25.198 crypto/cnxk: not in enabled drivers build config 00:18:25.198 crypto/dpaa_sec: not in enabled drivers build config 00:18:25.198 crypto/dpaa2_sec: not in enabled drivers build config 00:18:25.198 crypto/ipsec_mb: not in enabled drivers build config 00:18:25.198 crypto/mlx5: not in enabled drivers build config 00:18:25.198 crypto/mvsam: not in enabled drivers build config 00:18:25.198 crypto/nitrox: not in enabled drivers build config 00:18:25.198 crypto/null: not in enabled drivers build config 00:18:25.198 crypto/octeontx: not in enabled drivers build config 00:18:25.198 crypto/openssl: not in enabled drivers build config 00:18:25.198 crypto/scheduler: not in enabled drivers build config 00:18:25.198 crypto/uadk: not in enabled drivers build config 00:18:25.198 crypto/virtio: not in enabled drivers build config 00:18:25.198 compress/isal: not in enabled drivers build config 00:18:25.198 compress/mlx5: not in enabled drivers build config 00:18:25.198 compress/nitrox: not in enabled drivers build config 00:18:25.198 compress/octeontx: not in enabled drivers build config 00:18:25.198 compress/zlib: not in enabled drivers build config 00:18:25.198 regex/*: missing internal dependency, "regexdev" 00:18:25.198 ml/*: missing internal dependency, "mldev" 00:18:25.198 vdpa/ifc: not in enabled drivers build config 00:18:25.198 vdpa/mlx5: not in enabled drivers build config 00:18:25.198 vdpa/nfp: not in enabled drivers build config 00:18:25.198 vdpa/sfc: not in enabled drivers build config 00:18:25.198 event/*: missing internal dependency, "eventdev" 00:18:25.198 baseband/*: missing internal dependency, "bbdev" 00:18:25.198 gpu/*: missing internal dependency, "gpudev" 00:18:25.198 00:18:25.198 00:18:25.198 Build targets in project: 85 00:18:25.198 00:18:25.198 DPDK 24.03.0 00:18:25.198 00:18:25.198 User defined options 00:18:25.198 buildtype : debug 00:18:25.198 default_library : shared 00:18:25.198 libdir : lib 00:18:25.198 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:25.198 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:18:25.198 c_link_args : 00:18:25.198 cpu_instruction_set: native 00:18:25.198 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:18:25.198 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:18:25.198 enable_docs : false 00:18:25.198 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:18:25.198 enable_kmods : false 00:18:25.198 max_lcores : 128 00:18:25.198 tests : false 00:18:25.198 00:18:25.198 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:18:25.198 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:18:25.198 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:18:25.198 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:18:25.198 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:18:25.198 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:18:25.198 [5/268] Linking static target lib/librte_kvargs.a 00:18:25.198 [6/268] Linking static target lib/librte_log.a 00:18:25.198 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:18:25.457 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:18:25.457 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:18:25.457 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:18:25.457 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:18:25.457 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:18:25.457 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:18:25.457 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:18:25.457 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:18:25.457 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:18:25.714 [17/268] Linking static target lib/librte_telemetry.a 00:18:25.714 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:18:25.972 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:18:25.972 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:18:25.972 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:18:25.972 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:18:25.972 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:18:26.229 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:18:26.229 [25/268] Linking target lib/librte_log.so.24.1 00:18:26.229 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:18:26.229 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:18:26.229 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:18:26.485 [29/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:18:26.485 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:18:26.485 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:18:26.485 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:18:26.485 [33/268] Linking target lib/librte_kvargs.so.24.1 00:18:26.742 [34/268] Linking target lib/librte_telemetry.so.24.1 00:18:26.742 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:18:26.742 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:18:26.742 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:18:26.742 [38/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:18:27.004 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:18:27.005 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:18:27.005 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:18:27.005 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:18:27.005 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:18:27.005 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:18:27.005 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:18:27.282 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:18:27.282 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:18:27.282 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:18:27.282 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:18:27.539 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:18:27.539 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:18:27.796 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:18:27.796 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:18:27.796 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:18:27.796 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:18:27.796 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:18:28.053 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:18:28.311 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:18:28.311 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:18:28.311 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:18:28.311 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:18:28.311 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:18:28.311 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:18:28.568 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:18:28.568 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:18:28.825 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:18:28.825 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:18:29.083 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:18:29.083 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:18:29.083 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:18:29.083 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:18:29.083 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:18:29.083 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:18:29.387 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:18:29.387 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:18:29.387 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:18:29.387 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:18:29.387 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:18:29.387 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:18:29.644 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:18:29.644 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:18:29.901 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:18:29.901 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:18:29.901 [84/268] Linking static target lib/librte_ring.a 00:18:29.901 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:18:29.901 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:18:29.901 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:18:29.901 [88/268] Linking static target lib/librte_rcu.a 00:18:29.901 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:18:29.901 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:18:29.901 [91/268] Linking static target lib/librte_eal.a 00:18:29.901 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:18:30.158 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:18:30.158 [94/268] Linking static target lib/librte_mempool.a 00:18:30.423 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:18:30.423 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:18:30.423 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:18:30.423 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:18:30.423 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:18:30.423 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:18:30.679 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:18:30.679 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:18:30.679 [103/268] Linking static target lib/librte_mbuf.a 00:18:30.679 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:18:30.936 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:18:30.936 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:18:30.936 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:18:31.193 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:18:31.193 [109/268] Linking static target lib/librte_meter.a 00:18:31.193 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:18:31.193 [111/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:18:31.193 [112/268] Linking static target lib/librte_net.a 00:18:31.451 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:18:31.451 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:18:31.709 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:18:31.709 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:18:31.709 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:18:31.709 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:18:31.967 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:18:31.967 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:18:32.225 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:18:32.225 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:18:32.225 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:18:32.483 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:18:32.483 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:18:32.483 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:18:32.483 [127/268] Linking static target lib/librte_pci.a 00:18:32.741 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:18:32.741 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:18:32.741 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:18:32.741 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:18:32.741 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:18:32.741 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:18:32.999 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:18:32.999 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:18:32.999 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:18:32.999 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:18:32.999 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:18:32.999 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:18:32.999 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:32.999 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:18:33.290 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:18:33.290 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:18:33.290 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:18:33.290 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:18:33.290 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:18:33.290 [147/268] Linking static target lib/librte_cmdline.a 00:18:33.290 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:18:33.570 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:18:33.570 [150/268] Linking static target 
lib/librte_ethdev.a 00:18:33.570 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:18:33.827 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:18:33.827 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:18:33.827 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:18:33.827 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:18:33.827 [156/268] Linking static target lib/librte_timer.a 00:18:34.085 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:18:34.085 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:18:34.085 [159/268] Linking static target lib/librte_compressdev.a 00:18:34.085 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:18:34.343 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:18:34.343 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:18:34.343 [163/268] Linking static target lib/librte_hash.a 00:18:34.343 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:18:34.601 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:18:34.601 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:18:34.601 [167/268] Linking static target lib/librte_dmadev.a 00:18:34.601 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:18:34.859 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:18:34.859 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:18:34.859 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:18:34.859 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:18:34.859 [173/268] Linking static target lib/librte_cryptodev.a 00:18:35.119 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:18:35.377 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:18:35.377 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:35.377 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:18:35.377 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:18:35.635 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:18:35.635 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:35.635 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:18:35.892 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:18:35.892 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:18:35.892 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:18:35.892 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:18:35.892 [186/268] Linking static target lib/librte_power.a 00:18:36.149 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:18:36.149 [188/268] Linking static target lib/librte_reorder.a 00:18:36.149 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:18:36.421 [190/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:18:36.421 [191/268] Linking static target lib/librte_security.a 00:18:36.683 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:18:36.683 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:18:36.683 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:18:36.941 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:18:37.198 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:18:37.456 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:18:37.456 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:18:37.456 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:18:37.456 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:18:37.456 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:18:37.713 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:18:37.971 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:18:37.971 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:37.971 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:18:37.971 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:18:37.971 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:18:38.229 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:18:38.229 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:18:38.229 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:18:38.229 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:18:38.486 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:18:38.487 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:18:38.487 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:18:38.487 [215/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:18:38.487 [216/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:18:38.487 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:38.487 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:38.487 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:38.487 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:38.487 [221/268] Linking static target drivers/librte_bus_vdev.a 00:18:38.487 [222/268] Linking static target drivers/librte_bus_pci.a 00:18:38.745 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:18:38.745 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:38.745 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:38.745 [226/268] Linking static target drivers/librte_mempool_ring.a 00:18:39.005 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:39.296 
[228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:39.862 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:18:40.120 [230/268] Linking static target lib/librte_vhost.a 00:18:41.490 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:18:42.077 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:42.077 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:18:42.077 [234/268] Linking target lib/librte_eal.so.24.1 00:18:42.335 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:18:42.335 [236/268] Linking target lib/librte_timer.so.24.1 00:18:42.335 [237/268] Linking target lib/librte_dmadev.so.24.1 00:18:42.335 [238/268] Linking target lib/librte_pci.so.24.1 00:18:42.335 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:18:42.335 [240/268] Linking target lib/librte_ring.so.24.1 00:18:42.335 [241/268] Linking target lib/librte_meter.so.24.1 00:18:42.335 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:18:42.335 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:18:42.335 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:18:42.594 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:18:42.594 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:18:42.594 [247/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:18:42.594 [248/268] Linking target lib/librte_mempool.so.24.1 00:18:42.594 [249/268] Linking target lib/librte_rcu.so.24.1 00:18:42.853 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:18:42.853 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:18:42.853 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:18:42.853 [253/268] Linking target lib/librte_mbuf.so.24.1 00:18:42.853 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:18:43.110 [255/268] Linking target lib/librte_compressdev.so.24.1 00:18:43.110 [256/268] Linking target lib/librte_net.so.24.1 00:18:43.110 [257/268] Linking target lib/librte_reorder.so.24.1 00:18:43.110 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:18:43.110 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:18:43.110 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:18:43.110 [261/268] Linking target lib/librte_hash.so.24.1 00:18:43.110 [262/268] Linking target lib/librte_cmdline.so.24.1 00:18:43.368 [263/268] Linking target lib/librte_ethdev.so.24.1 00:18:43.368 [264/268] Linking target lib/librte_security.so.24.1 00:18:43.368 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:18:43.368 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:18:43.368 [267/268] Linking target lib/librte_power.so.24.1 00:18:43.626 [268/268] Linking target lib/librte_vhost.so.24.1 00:18:43.627 INFO: autodetecting backend as ninja 00:18:43.627 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:19:15.854 CC lib/ut/ut.o 00:19:15.854 CC 
lib/log/log.o 00:19:15.854 CC lib/log/log_flags.o 00:19:15.854 CC lib/log/log_deprecated.o 00:19:15.854 CC lib/ut_mock/mock.o 00:19:15.854 LIB libspdk_ut_mock.a 00:19:15.854 LIB libspdk_ut.a 00:19:15.854 LIB libspdk_log.a 00:19:15.854 SO libspdk_ut_mock.so.6.0 00:19:15.854 SO libspdk_ut.so.2.0 00:19:15.854 SO libspdk_log.so.7.1 00:19:15.854 SYMLINK libspdk_ut.so 00:19:15.854 SYMLINK libspdk_ut_mock.so 00:19:15.854 SYMLINK libspdk_log.so 00:19:15.854 CC lib/dma/dma.o 00:19:15.854 CC lib/ioat/ioat.o 00:19:15.854 CC lib/util/base64.o 00:19:15.854 CC lib/util/bit_array.o 00:19:15.854 CC lib/util/cpuset.o 00:19:15.854 CC lib/util/crc32.o 00:19:15.854 CC lib/util/crc16.o 00:19:15.854 CC lib/util/crc32c.o 00:19:15.854 CXX lib/trace_parser/trace.o 00:19:15.854 CC lib/vfio_user/host/vfio_user_pci.o 00:19:15.854 CC lib/util/crc32_ieee.o 00:19:15.854 CC lib/util/crc64.o 00:19:15.854 LIB libspdk_dma.a 00:19:15.854 CC lib/util/dif.o 00:19:15.854 LIB libspdk_ioat.a 00:19:15.854 SO libspdk_dma.so.5.0 00:19:15.854 CC lib/util/fd.o 00:19:15.854 SO libspdk_ioat.so.7.0 00:19:15.854 SYMLINK libspdk_dma.so 00:19:15.854 CC lib/util/fd_group.o 00:19:15.854 CC lib/vfio_user/host/vfio_user.o 00:19:15.854 CC lib/util/file.o 00:19:15.854 CC lib/util/hexlify.o 00:19:15.854 SYMLINK libspdk_ioat.so 00:19:15.854 CC lib/util/iov.o 00:19:15.854 CC lib/util/math.o 00:19:15.854 CC lib/util/net.o 00:19:15.854 CC lib/util/pipe.o 00:19:15.854 CC lib/util/strerror_tls.o 00:19:15.854 CC lib/util/string.o 00:19:15.854 CC lib/util/uuid.o 00:19:15.854 LIB libspdk_vfio_user.a 00:19:15.854 SO libspdk_vfio_user.so.5.0 00:19:15.854 CC lib/util/xor.o 00:19:15.854 CC lib/util/zipf.o 00:19:15.854 CC lib/util/md5.o 00:19:15.854 SYMLINK libspdk_vfio_user.so 00:19:15.854 LIB libspdk_util.a 00:19:15.854 LIB libspdk_trace_parser.a 00:19:15.854 SO libspdk_trace_parser.so.6.0 00:19:15.854 SO libspdk_util.so.10.1 00:19:15.854 SYMLINK libspdk_trace_parser.so 00:19:15.854 SYMLINK libspdk_util.so 00:19:15.854 CC lib/rdma_utils/rdma_utils.o 00:19:15.854 CC lib/vmd/led.o 00:19:15.854 CC lib/vmd/vmd.o 00:19:15.854 CC lib/conf/conf.o 00:19:15.854 CC lib/idxd/idxd.o 00:19:15.854 CC lib/json/json_parse.o 00:19:15.854 CC lib/idxd/idxd_kernel.o 00:19:15.854 CC lib/idxd/idxd_user.o 00:19:15.854 CC lib/json/json_util.o 00:19:15.854 CC lib/env_dpdk/env.o 00:19:15.854 CC lib/env_dpdk/memory.o 00:19:15.854 CC lib/env_dpdk/pci.o 00:19:15.854 CC lib/json/json_write.o 00:19:15.854 CC lib/env_dpdk/init.o 00:19:15.854 LIB libspdk_rdma_utils.a 00:19:15.854 LIB libspdk_conf.a 00:19:15.854 SO libspdk_rdma_utils.so.1.0 00:19:15.854 SO libspdk_conf.so.6.0 00:19:15.854 SYMLINK libspdk_rdma_utils.so 00:19:15.854 CC lib/env_dpdk/threads.o 00:19:15.854 SYMLINK libspdk_conf.so 00:19:15.854 CC lib/env_dpdk/pci_ioat.o 00:19:15.854 CC lib/env_dpdk/pci_virtio.o 00:19:15.854 LIB libspdk_idxd.a 00:19:15.854 LIB libspdk_json.a 00:19:15.854 SO libspdk_idxd.so.12.1 00:19:15.854 SO libspdk_json.so.6.0 00:19:15.854 SYMLINK libspdk_idxd.so 00:19:15.854 CC lib/env_dpdk/pci_vmd.o 00:19:15.854 SYMLINK libspdk_json.so 00:19:15.854 CC lib/env_dpdk/pci_idxd.o 00:19:15.854 CC lib/env_dpdk/pci_event.o 00:19:15.854 CC lib/env_dpdk/sigbus_handler.o 00:19:15.854 CC lib/env_dpdk/pci_dpdk.o 00:19:15.854 CC lib/jsonrpc/jsonrpc_server.o 00:19:15.854 CC lib/rdma_provider/common.o 00:19:15.854 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:19:15.854 CC lib/rdma_provider/rdma_provider_verbs.o 00:19:15.854 CC lib/env_dpdk/pci_dpdk_2207.o 00:19:15.854 CC lib/env_dpdk/pci_dpdk_2211.o 00:19:15.854 CC 
lib/jsonrpc/jsonrpc_client.o 00:19:15.854 LIB libspdk_vmd.a 00:19:15.854 SO libspdk_vmd.so.6.0 00:19:15.854 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:19:15.854 SYMLINK libspdk_vmd.so 00:19:15.854 LIB libspdk_rdma_provider.a 00:19:15.854 SO libspdk_rdma_provider.so.7.0 00:19:15.854 SYMLINK libspdk_rdma_provider.so 00:19:15.854 LIB libspdk_jsonrpc.a 00:19:15.854 LIB libspdk_env_dpdk.a 00:19:15.854 SO libspdk_jsonrpc.so.6.0 00:19:15.854 SYMLINK libspdk_jsonrpc.so 00:19:15.854 SO libspdk_env_dpdk.so.15.1 00:19:16.112 SYMLINK libspdk_env_dpdk.so 00:19:16.112 CC lib/rpc/rpc.o 00:19:16.370 LIB libspdk_rpc.a 00:19:16.370 SO libspdk_rpc.so.6.0 00:19:16.627 SYMLINK libspdk_rpc.so 00:19:16.884 CC lib/notify/notify_rpc.o 00:19:16.884 CC lib/notify/notify.o 00:19:16.884 CC lib/keyring/keyring.o 00:19:16.884 CC lib/keyring/keyring_rpc.o 00:19:16.884 CC lib/trace/trace_flags.o 00:19:16.884 CC lib/trace/trace.o 00:19:16.884 CC lib/trace/trace_rpc.o 00:19:16.884 LIB libspdk_notify.a 00:19:17.142 SO libspdk_notify.so.6.0 00:19:17.142 LIB libspdk_keyring.a 00:19:17.142 LIB libspdk_trace.a 00:19:17.142 SO libspdk_keyring.so.2.0 00:19:17.142 SO libspdk_trace.so.11.0 00:19:17.142 SYMLINK libspdk_notify.so 00:19:17.142 SYMLINK libspdk_keyring.so 00:19:17.142 SYMLINK libspdk_trace.so 00:19:17.400 CC lib/thread/thread.o 00:19:17.400 CC lib/thread/iobuf.o 00:19:17.400 CC lib/sock/sock.o 00:19:17.400 CC lib/sock/sock_rpc.o 00:19:17.965 LIB libspdk_sock.a 00:19:17.965 SO libspdk_sock.so.10.0 00:19:17.965 SYMLINK libspdk_sock.so 00:19:18.223 CC lib/nvme/nvme_ctrlr_cmd.o 00:19:18.223 CC lib/nvme/nvme_ns_cmd.o 00:19:18.223 CC lib/nvme/nvme_fabric.o 00:19:18.223 CC lib/nvme/nvme_ns.o 00:19:18.223 CC lib/nvme/nvme_ctrlr.o 00:19:18.223 CC lib/nvme/nvme.o 00:19:18.223 CC lib/nvme/nvme_pcie_common.o 00:19:18.223 CC lib/nvme/nvme_pcie.o 00:19:18.223 CC lib/nvme/nvme_qpair.o 00:19:19.156 CC lib/nvme/nvme_quirks.o 00:19:19.156 CC lib/nvme/nvme_transport.o 00:19:19.156 CC lib/nvme/nvme_discovery.o 00:19:19.415 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:19:19.672 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:19:19.672 LIB libspdk_thread.a 00:19:19.672 SO libspdk_thread.so.11.0 00:19:19.929 CC lib/nvme/nvme_tcp.o 00:19:19.929 CC lib/nvme/nvme_opal.o 00:19:19.929 CC lib/nvme/nvme_io_msg.o 00:19:19.929 SYMLINK libspdk_thread.so 00:19:19.929 CC lib/nvme/nvme_poll_group.o 00:19:20.184 CC lib/accel/accel.o 00:19:20.184 CC lib/blob/blobstore.o 00:19:20.440 CC lib/init/json_config.o 00:19:20.696 CC lib/blob/request.o 00:19:20.696 CC lib/virtio/virtio.o 00:19:20.696 CC lib/fsdev/fsdev.o 00:19:20.953 CC lib/accel/accel_rpc.o 00:19:20.953 CC lib/init/subsystem.o 00:19:20.953 CC lib/virtio/virtio_vhost_user.o 00:19:20.953 CC lib/init/subsystem_rpc.o 00:19:20.953 CC lib/init/rpc.o 00:19:20.953 CC lib/accel/accel_sw.o 00:19:21.210 CC lib/fsdev/fsdev_io.o 00:19:21.210 CC lib/fsdev/fsdev_rpc.o 00:19:21.210 CC lib/blob/zeroes.o 00:19:21.210 LIB libspdk_init.a 00:19:21.210 CC lib/virtio/virtio_vfio_user.o 00:19:21.467 SO libspdk_init.so.6.0 00:19:21.467 CC lib/nvme/nvme_zns.o 00:19:21.467 CC lib/blob/blob_bs_dev.o 00:19:21.467 CC lib/nvme/nvme_stubs.o 00:19:21.467 CC lib/virtio/virtio_pci.o 00:19:21.467 LIB libspdk_accel.a 00:19:21.467 SYMLINK libspdk_init.so 00:19:21.467 CC lib/nvme/nvme_auth.o 00:19:21.467 SO libspdk_accel.so.16.0 00:19:21.467 CC lib/nvme/nvme_cuse.o 00:19:21.731 SYMLINK libspdk_accel.so 00:19:21.731 CC lib/nvme/nvme_rdma.o 00:19:21.731 LIB libspdk_fsdev.a 00:19:21.731 SO libspdk_fsdev.so.2.0 00:19:21.731 LIB libspdk_virtio.a 00:19:21.988 SO 
libspdk_virtio.so.7.0 00:19:21.988 CC lib/event/app.o 00:19:21.988 SYMLINK libspdk_fsdev.so 00:19:21.988 CC lib/event/reactor.o 00:19:21.988 SYMLINK libspdk_virtio.so 00:19:21.988 CC lib/bdev/bdev.o 00:19:21.988 CC lib/event/log_rpc.o 00:19:21.988 CC lib/event/app_rpc.o 00:19:22.245 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:19:22.245 CC lib/event/scheduler_static.o 00:19:22.501 CC lib/bdev/bdev_rpc.o 00:19:22.501 CC lib/bdev/bdev_zone.o 00:19:22.501 CC lib/bdev/part.o 00:19:22.757 LIB libspdk_event.a 00:19:22.757 CC lib/bdev/scsi_nvme.o 00:19:22.757 SO libspdk_event.so.14.0 00:19:22.757 SYMLINK libspdk_event.so 00:19:23.014 LIB libspdk_fuse_dispatcher.a 00:19:23.014 SO libspdk_fuse_dispatcher.so.1.0 00:19:23.014 SYMLINK libspdk_fuse_dispatcher.so 00:19:23.271 LIB libspdk_nvme.a 00:19:23.271 SO libspdk_nvme.so.15.0 00:19:23.835 SYMLINK libspdk_nvme.so 00:19:23.835 LIB libspdk_blob.a 00:19:24.092 SO libspdk_blob.so.12.0 00:19:24.092 SYMLINK libspdk_blob.so 00:19:24.349 CC lib/lvol/lvol.o 00:19:24.349 CC lib/blobfs/blobfs.o 00:19:24.349 CC lib/blobfs/tree.o 00:19:25.281 LIB libspdk_bdev.a 00:19:25.281 SO libspdk_bdev.so.17.0 00:19:25.539 SYMLINK libspdk_bdev.so 00:19:25.539 LIB libspdk_blobfs.a 00:19:25.539 LIB libspdk_lvol.a 00:19:25.539 SO libspdk_lvol.so.11.0 00:19:25.539 SO libspdk_blobfs.so.11.0 00:19:25.539 SYMLINK libspdk_lvol.so 00:19:25.539 SYMLINK libspdk_blobfs.so 00:19:25.797 CC lib/ublk/ublk.o 00:19:25.797 CC lib/ublk/ublk_rpc.o 00:19:25.797 CC lib/scsi/dev.o 00:19:25.797 CC lib/scsi/lun.o 00:19:25.797 CC lib/nvmf/ctrlr_discovery.o 00:19:25.797 CC lib/nbd/nbd.o 00:19:25.797 CC lib/scsi/port.o 00:19:25.797 CC lib/nbd/nbd_rpc.o 00:19:25.797 CC lib/nvmf/ctrlr.o 00:19:25.797 CC lib/ftl/ftl_core.o 00:19:26.055 CC lib/ftl/ftl_init.o 00:19:26.055 CC lib/ftl/ftl_layout.o 00:19:26.055 CC lib/ftl/ftl_debug.o 00:19:26.055 CC lib/ftl/ftl_io.o 00:19:26.055 CC lib/scsi/scsi.o 00:19:26.312 CC lib/scsi/scsi_bdev.o 00:19:26.312 CC lib/nvmf/ctrlr_bdev.o 00:19:26.312 CC lib/nvmf/subsystem.o 00:19:26.312 CC lib/nvmf/nvmf.o 00:19:26.312 LIB libspdk_nbd.a 00:19:26.312 CC lib/ftl/ftl_sb.o 00:19:26.312 CC lib/ftl/ftl_l2p.o 00:19:26.570 SO libspdk_nbd.so.7.0 00:19:26.570 LIB libspdk_ublk.a 00:19:26.570 CC lib/nvmf/nvmf_rpc.o 00:19:26.570 SYMLINK libspdk_nbd.so 00:19:26.570 CC lib/nvmf/transport.o 00:19:26.570 SO libspdk_ublk.so.3.0 00:19:26.570 CC lib/nvmf/tcp.o 00:19:26.570 CC lib/ftl/ftl_l2p_flat.o 00:19:26.828 SYMLINK libspdk_ublk.so 00:19:26.828 CC lib/nvmf/stubs.o 00:19:26.828 CC lib/scsi/scsi_pr.o 00:19:26.828 CC lib/ftl/ftl_nv_cache.o 00:19:27.086 CC lib/nvmf/mdns_server.o 00:19:27.086 CC lib/scsi/scsi_rpc.o 00:19:27.345 CC lib/nvmf/rdma.o 00:19:27.345 CC lib/scsi/task.o 00:19:27.345 CC lib/nvmf/auth.o 00:19:27.345 CC lib/ftl/ftl_band.o 00:19:27.664 CC lib/ftl/ftl_band_ops.o 00:19:27.664 LIB libspdk_scsi.a 00:19:27.664 CC lib/ftl/ftl_writer.o 00:19:27.664 SO libspdk_scsi.so.9.0 00:19:27.664 CC lib/ftl/ftl_rq.o 00:19:27.664 SYMLINK libspdk_scsi.so 00:19:27.664 CC lib/ftl/ftl_reloc.o 00:19:27.935 CC lib/ftl/ftl_l2p_cache.o 00:19:27.935 CC lib/ftl/ftl_p2l.o 00:19:27.935 CC lib/ftl/ftl_p2l_log.o 00:19:27.935 CC lib/ftl/mngt/ftl_mngt.o 00:19:27.935 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:19:28.194 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:19:28.194 CC lib/ftl/mngt/ftl_mngt_startup.o 00:19:28.194 CC lib/ftl/mngt/ftl_mngt_md.o 00:19:28.194 CC lib/ftl/mngt/ftl_mngt_misc.o 00:19:28.194 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:19:28.453 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:19:28.453 CC lib/ftl/mngt/ftl_mngt_band.o 
00:19:28.453 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:19:28.453 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:19:28.453 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:19:28.453 CC lib/iscsi/conn.o 00:19:28.453 CC lib/vhost/vhost.o 00:19:28.453 CC lib/iscsi/init_grp.o 00:19:28.453 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:19:28.453 CC lib/iscsi/iscsi.o 00:19:28.711 CC lib/iscsi/param.o 00:19:28.711 CC lib/iscsi/portal_grp.o 00:19:28.711 CC lib/vhost/vhost_rpc.o 00:19:28.711 CC lib/vhost/vhost_scsi.o 00:19:28.711 CC lib/iscsi/tgt_node.o 00:19:28.970 CC lib/ftl/utils/ftl_conf.o 00:19:28.970 CC lib/iscsi/iscsi_subsystem.o 00:19:29.228 CC lib/iscsi/iscsi_rpc.o 00:19:29.228 CC lib/iscsi/task.o 00:19:29.228 CC lib/ftl/utils/ftl_md.o 00:19:29.228 CC lib/vhost/vhost_blk.o 00:19:29.486 CC lib/ftl/utils/ftl_mempool.o 00:19:29.486 CC lib/vhost/rte_vhost_user.o 00:19:29.486 LIB libspdk_nvmf.a 00:19:29.486 CC lib/ftl/utils/ftl_bitmap.o 00:19:29.486 CC lib/ftl/utils/ftl_property.o 00:19:29.486 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:19:29.486 SO libspdk_nvmf.so.20.0 00:19:29.486 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:19:29.745 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:19:29.745 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:19:29.745 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:19:29.745 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:19:29.745 SYMLINK libspdk_nvmf.so 00:19:29.745 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:19:29.745 CC lib/ftl/upgrade/ftl_sb_v3.o 00:19:29.745 CC lib/ftl/upgrade/ftl_sb_v5.o 00:19:30.004 CC lib/ftl/nvc/ftl_nvc_dev.o 00:19:30.004 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:19:30.004 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:19:30.004 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:19:30.004 LIB libspdk_iscsi.a 00:19:30.004 CC lib/ftl/base/ftl_base_dev.o 00:19:30.004 CC lib/ftl/base/ftl_base_bdev.o 00:19:30.004 CC lib/ftl/ftl_trace.o 00:19:30.262 SO libspdk_iscsi.so.8.0 00:19:30.262 SYMLINK libspdk_iscsi.so 00:19:30.262 LIB libspdk_ftl.a 00:19:30.519 LIB libspdk_vhost.a 00:19:30.519 SO libspdk_vhost.so.8.0 00:19:30.519 SO libspdk_ftl.so.9.0 00:19:30.776 SYMLINK libspdk_vhost.so 00:19:31.034 SYMLINK libspdk_ftl.so 00:19:31.293 CC module/env_dpdk/env_dpdk_rpc.o 00:19:31.551 CC module/scheduler/dynamic/scheduler_dynamic.o 00:19:31.551 CC module/sock/posix/posix.o 00:19:31.551 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:19:31.551 CC module/scheduler/gscheduler/gscheduler.o 00:19:31.551 CC module/accel/error/accel_error.o 00:19:31.551 CC module/keyring/file/keyring.o 00:19:31.551 CC module/keyring/linux/keyring.o 00:19:31.551 CC module/fsdev/aio/fsdev_aio.o 00:19:31.551 CC module/blob/bdev/blob_bdev.o 00:19:31.551 LIB libspdk_env_dpdk_rpc.a 00:19:31.551 SO libspdk_env_dpdk_rpc.so.6.0 00:19:31.551 SYMLINK libspdk_env_dpdk_rpc.so 00:19:31.551 CC module/keyring/linux/keyring_rpc.o 00:19:31.551 CC module/fsdev/aio/fsdev_aio_rpc.o 00:19:31.551 LIB libspdk_scheduler_gscheduler.a 00:19:31.551 CC module/keyring/file/keyring_rpc.o 00:19:31.551 SO libspdk_scheduler_gscheduler.so.4.0 00:19:31.551 LIB libspdk_scheduler_dynamic.a 00:19:31.810 LIB libspdk_scheduler_dpdk_governor.a 00:19:31.810 SYMLINK libspdk_scheduler_gscheduler.so 00:19:31.810 CC module/accel/error/accel_error_rpc.o 00:19:31.810 SO libspdk_scheduler_dynamic.so.4.0 00:19:31.810 SO libspdk_scheduler_dpdk_governor.so.4.0 00:19:31.810 CC module/fsdev/aio/linux_aio_mgr.o 00:19:31.810 LIB libspdk_keyring_linux.a 00:19:31.810 SYMLINK libspdk_scheduler_dpdk_governor.so 00:19:31.810 SYMLINK libspdk_scheduler_dynamic.so 00:19:31.810 LIB libspdk_keyring_file.a 00:19:31.810 SO 
libspdk_keyring_linux.so.1.0 00:19:31.810 SO libspdk_keyring_file.so.2.0 00:19:31.810 LIB libspdk_blob_bdev.a 00:19:31.810 SO libspdk_blob_bdev.so.12.0 00:19:32.067 SYMLINK libspdk_keyring_linux.so 00:19:32.067 LIB libspdk_accel_error.a 00:19:32.067 SYMLINK libspdk_keyring_file.so 00:19:32.067 SYMLINK libspdk_blob_bdev.so 00:19:32.067 SO libspdk_accel_error.so.2.0 00:19:32.067 CC module/accel/dsa/accel_dsa_rpc.o 00:19:32.067 CC module/accel/dsa/accel_dsa.o 00:19:32.067 SYMLINK libspdk_accel_error.so 00:19:32.067 CC module/accel/ioat/accel_ioat_rpc.o 00:19:32.067 CC module/accel/ioat/accel_ioat.o 00:19:32.067 CC module/accel/iaa/accel_iaa.o 00:19:32.067 CC module/accel/iaa/accel_iaa_rpc.o 00:19:32.325 LIB libspdk_fsdev_aio.a 00:19:32.325 CC module/bdev/delay/vbdev_delay.o 00:19:32.325 LIB libspdk_accel_ioat.a 00:19:32.325 CC module/blobfs/bdev/blobfs_bdev.o 00:19:32.325 SO libspdk_fsdev_aio.so.1.0 00:19:32.325 SO libspdk_accel_ioat.so.6.0 00:19:32.325 LIB libspdk_accel_dsa.a 00:19:32.325 CC module/bdev/error/vbdev_error.o 00:19:32.582 SO libspdk_accel_dsa.so.5.0 00:19:32.582 LIB libspdk_sock_posix.a 00:19:32.582 SYMLINK libspdk_fsdev_aio.so 00:19:32.582 SYMLINK libspdk_accel_ioat.so 00:19:32.582 CC module/bdev/error/vbdev_error_rpc.o 00:19:32.582 CC module/bdev/delay/vbdev_delay_rpc.o 00:19:32.582 CC module/bdev/gpt/gpt.o 00:19:32.582 LIB libspdk_accel_iaa.a 00:19:32.582 SO libspdk_sock_posix.so.6.0 00:19:32.582 SO libspdk_accel_iaa.so.3.0 00:19:32.582 SYMLINK libspdk_accel_dsa.so 00:19:32.582 CC module/bdev/lvol/vbdev_lvol.o 00:19:32.582 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:19:32.582 SYMLINK libspdk_accel_iaa.so 00:19:32.582 SYMLINK libspdk_sock_posix.so 00:19:32.582 CC module/bdev/gpt/vbdev_gpt.o 00:19:32.582 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:19:32.840 LIB libspdk_bdev_error.a 00:19:32.840 SO libspdk_bdev_error.so.6.0 00:19:32.840 CC module/bdev/malloc/bdev_malloc.o 00:19:32.840 LIB libspdk_blobfs_bdev.a 00:19:32.840 SO libspdk_blobfs_bdev.so.6.0 00:19:32.840 SYMLINK libspdk_bdev_error.so 00:19:32.840 CC module/bdev/null/bdev_null.o 00:19:32.840 SYMLINK libspdk_blobfs_bdev.so 00:19:32.840 LIB libspdk_bdev_gpt.a 00:19:32.840 LIB libspdk_bdev_delay.a 00:19:33.098 CC module/bdev/null/bdev_null_rpc.o 00:19:33.098 CC module/bdev/nvme/bdev_nvme.o 00:19:33.098 CC module/bdev/passthru/vbdev_passthru.o 00:19:33.098 SO libspdk_bdev_gpt.so.6.0 00:19:33.098 SO libspdk_bdev_delay.so.6.0 00:19:33.098 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:19:33.098 SYMLINK libspdk_bdev_gpt.so 00:19:33.098 CC module/bdev/raid/bdev_raid.o 00:19:33.099 SYMLINK libspdk_bdev_delay.so 00:19:33.099 CC module/bdev/nvme/bdev_nvme_rpc.o 00:19:33.099 LIB libspdk_bdev_lvol.a 00:19:33.099 LIB libspdk_bdev_null.a 00:19:33.099 SO libspdk_bdev_lvol.so.6.0 00:19:33.099 SO libspdk_bdev_null.so.6.0 00:19:33.356 CC module/bdev/malloc/bdev_malloc_rpc.o 00:19:33.356 SYMLINK libspdk_bdev_lvol.so 00:19:33.356 CC module/bdev/split/vbdev_split.o 00:19:33.356 SYMLINK libspdk_bdev_null.so 00:19:33.356 LIB libspdk_bdev_passthru.a 00:19:33.356 CC module/bdev/zone_block/vbdev_zone_block.o 00:19:33.356 SO libspdk_bdev_passthru.so.6.0 00:19:33.356 LIB libspdk_bdev_malloc.a 00:19:33.665 SYMLINK libspdk_bdev_passthru.so 00:19:33.666 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:19:33.666 SO libspdk_bdev_malloc.so.6.0 00:19:33.666 CC module/bdev/aio/bdev_aio.o 00:19:33.666 CC module/bdev/ftl/bdev_ftl.o 00:19:33.666 CC module/bdev/iscsi/bdev_iscsi.o 00:19:33.666 SYMLINK libspdk_bdev_malloc.so 00:19:33.666 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:19:33.666 CC module/bdev/split/vbdev_split_rpc.o 00:19:33.666 CC module/bdev/raid/bdev_raid_rpc.o 00:19:33.926 LIB libspdk_bdev_zone_block.a 00:19:33.926 CC module/bdev/raid/bdev_raid_sb.o 00:19:33.926 SO libspdk_bdev_zone_block.so.6.0 00:19:33.926 LIB libspdk_bdev_ftl.a 00:19:33.926 LIB libspdk_bdev_split.a 00:19:33.926 SO libspdk_bdev_ftl.so.6.0 00:19:33.926 SO libspdk_bdev_split.so.6.0 00:19:33.926 SYMLINK libspdk_bdev_zone_block.so 00:19:33.926 CC module/bdev/raid/raid0.o 00:19:33.926 CC module/bdev/aio/bdev_aio_rpc.o 00:19:33.926 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:19:33.926 SYMLINK libspdk_bdev_ftl.so 00:19:33.926 CC module/bdev/nvme/nvme_rpc.o 00:19:33.926 CC module/bdev/raid/raid1.o 00:19:34.184 SYMLINK libspdk_bdev_split.so 00:19:34.184 CC module/bdev/raid/concat.o 00:19:34.184 CC module/bdev/nvme/bdev_mdns_client.o 00:19:34.184 LIB libspdk_bdev_aio.a 00:19:34.184 LIB libspdk_bdev_iscsi.a 00:19:34.184 SO libspdk_bdev_aio.so.6.0 00:19:34.184 SO libspdk_bdev_iscsi.so.6.0 00:19:34.448 CC module/bdev/virtio/bdev_virtio_scsi.o 00:19:34.448 CC module/bdev/virtio/bdev_virtio_blk.o 00:19:34.448 SYMLINK libspdk_bdev_aio.so 00:19:34.448 CC module/bdev/virtio/bdev_virtio_rpc.o 00:19:34.448 CC module/bdev/nvme/vbdev_opal.o 00:19:34.448 SYMLINK libspdk_bdev_iscsi.so 00:19:34.448 CC module/bdev/nvme/vbdev_opal_rpc.o 00:19:34.448 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:19:35.014 LIB libspdk_bdev_raid.a 00:19:35.014 SO libspdk_bdev_raid.so.6.0 00:19:35.014 LIB libspdk_bdev_virtio.a 00:19:35.014 SO libspdk_bdev_virtio.so.6.0 00:19:35.014 SYMLINK libspdk_bdev_raid.so 00:19:35.014 SYMLINK libspdk_bdev_virtio.so 00:19:36.386 LIB libspdk_bdev_nvme.a 00:19:36.386 SO libspdk_bdev_nvme.so.7.1 00:19:36.673 SYMLINK libspdk_bdev_nvme.so 00:19:37.241 CC module/event/subsystems/iobuf/iobuf.o 00:19:37.241 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:19:37.241 CC module/event/subsystems/sock/sock.o 00:19:37.241 CC module/event/subsystems/keyring/keyring.o 00:19:37.241 CC module/event/subsystems/vmd/vmd.o 00:19:37.241 CC module/event/subsystems/vmd/vmd_rpc.o 00:19:37.241 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:19:37.241 CC module/event/subsystems/fsdev/fsdev.o 00:19:37.241 CC module/event/subsystems/scheduler/scheduler.o 00:19:37.241 LIB libspdk_event_vhost_blk.a 00:19:37.500 LIB libspdk_event_sock.a 00:19:37.500 LIB libspdk_event_fsdev.a 00:19:37.500 LIB libspdk_event_keyring.a 00:19:37.500 LIB libspdk_event_vmd.a 00:19:37.500 SO libspdk_event_vhost_blk.so.3.0 00:19:37.500 SO libspdk_event_sock.so.5.0 00:19:37.500 LIB libspdk_event_scheduler.a 00:19:37.500 SO libspdk_event_fsdev.so.1.0 00:19:37.500 LIB libspdk_event_iobuf.a 00:19:37.500 SO libspdk_event_keyring.so.1.0 00:19:37.500 SO libspdk_event_vmd.so.6.0 00:19:37.500 SO libspdk_event_scheduler.so.4.0 00:19:37.500 SYMLINK libspdk_event_vhost_blk.so 00:19:37.500 SO libspdk_event_iobuf.so.3.0 00:19:37.500 SYMLINK libspdk_event_sock.so 00:19:37.500 SYMLINK libspdk_event_fsdev.so 00:19:37.500 SYMLINK libspdk_event_vmd.so 00:19:37.500 SYMLINK libspdk_event_keyring.so 00:19:37.500 SYMLINK libspdk_event_scheduler.so 00:19:37.500 SYMLINK libspdk_event_iobuf.so 00:19:37.757 CC module/event/subsystems/accel/accel.o 00:19:38.015 LIB libspdk_event_accel.a 00:19:38.015 SO libspdk_event_accel.so.6.0 00:19:38.015 SYMLINK libspdk_event_accel.so 00:19:38.603 CC module/event/subsystems/bdev/bdev.o 00:19:38.603 LIB libspdk_event_bdev.a 00:19:38.603 SO libspdk_event_bdev.so.6.0 00:19:38.859 SYMLINK libspdk_event_bdev.so 
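The LIB/SO/SYMLINK triples in this stretch appear to record SPDK's per-component library steps: a static archive, a versioned shared object (libspdk_<name>.so.<major>.<minor>), and an unversioned symlink pointing at it. A quick way to confirm that layout on disk after such a build, illustrative only, with the in-tree output directory assumed to be build/lib under the repo root:

# Inspect one of the event-subsystem libraries linked above (paths assumed).
ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_event_bdev.so*
# List its exported dynamic symbols; the output naturally varies by SPDK version.
nm -D --defined-only /home/vagrant/spdk_repo/spdk/build/lib/libspdk_event_bdev.so | head
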
00:19:39.117 CC module/event/subsystems/ublk/ublk.o 00:19:39.117 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:19:39.117 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:19:39.117 CC module/event/subsystems/scsi/scsi.o 00:19:39.117 CC module/event/subsystems/nbd/nbd.o 00:19:39.374 LIB libspdk_event_ublk.a 00:19:39.374 LIB libspdk_event_scsi.a 00:19:39.374 SO libspdk_event_ublk.so.3.0 00:19:39.374 SO libspdk_event_scsi.so.6.0 00:19:39.374 LIB libspdk_event_nbd.a 00:19:39.374 SYMLINK libspdk_event_ublk.so 00:19:39.374 SYMLINK libspdk_event_scsi.so 00:19:39.374 SO libspdk_event_nbd.so.6.0 00:19:39.374 LIB libspdk_event_nvmf.a 00:19:39.374 SO libspdk_event_nvmf.so.6.0 00:19:39.374 SYMLINK libspdk_event_nbd.so 00:19:39.631 SYMLINK libspdk_event_nvmf.so 00:19:39.631 CC module/event/subsystems/iscsi/iscsi.o 00:19:39.631 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:19:39.889 LIB libspdk_event_iscsi.a 00:19:39.889 LIB libspdk_event_vhost_scsi.a 00:19:39.889 SO libspdk_event_iscsi.so.6.0 00:19:39.889 SO libspdk_event_vhost_scsi.so.3.0 00:19:39.889 SYMLINK libspdk_event_iscsi.so 00:19:39.889 SYMLINK libspdk_event_vhost_scsi.so 00:19:40.147 SO libspdk.so.6.0 00:19:40.147 SYMLINK libspdk.so 00:19:40.405 CC app/spdk_nvme_perf/perf.o 00:19:40.405 CC app/spdk_lspci/spdk_lspci.o 00:19:40.405 CC app/trace_record/trace_record.o 00:19:40.405 CXX app/trace/trace.o 00:19:40.405 CC app/spdk_nvme_identify/identify.o 00:19:40.405 CC app/nvmf_tgt/nvmf_main.o 00:19:40.405 CC app/iscsi_tgt/iscsi_tgt.o 00:19:40.405 CC app/spdk_tgt/spdk_tgt.o 00:19:40.405 CC examples/util/zipf/zipf.o 00:19:40.405 CC test/thread/poller_perf/poller_perf.o 00:19:40.666 LINK spdk_lspci 00:19:40.666 LINK spdk_trace_record 00:19:40.666 LINK nvmf_tgt 00:19:40.666 LINK spdk_tgt 00:19:40.666 LINK poller_perf 00:19:40.666 LINK zipf 00:19:40.666 LINK spdk_trace 00:19:40.666 LINK iscsi_tgt 00:19:40.922 CC examples/ioat/perf/perf.o 00:19:40.922 CC examples/ioat/verify/verify.o 00:19:41.180 CC app/spdk_nvme_discover/discovery_aer.o 00:19:41.180 CC app/spdk_top/spdk_top.o 00:19:41.180 CC examples/interrupt_tgt/interrupt_tgt.o 00:19:41.180 LINK spdk_nvme_perf 00:19:41.180 LINK ioat_perf 00:19:41.180 LINK verify 00:19:41.180 CC test/dma/test_dma/test_dma.o 00:19:41.180 LINK spdk_nvme_identify 00:19:41.437 LINK spdk_nvme_discover 00:19:41.437 LINK interrupt_tgt 00:19:41.437 CC examples/thread/thread/thread_ex.o 00:19:41.693 CC app/vhost/vhost.o 00:19:41.693 CC examples/sock/hello_world/hello_sock.o 00:19:41.693 CC examples/vmd/lsvmd/lsvmd.o 00:19:41.693 CC examples/idxd/perf/perf.o 00:19:41.693 LINK thread 00:19:41.694 CC app/spdk_dd/spdk_dd.o 00:19:41.950 LINK test_dma 00:19:41.950 LINK lsvmd 00:19:41.950 CC test/app/bdev_svc/bdev_svc.o 00:19:41.950 LINK hello_sock 00:19:41.950 LINK vhost 00:19:41.950 LINK spdk_top 00:19:41.950 LINK idxd_perf 00:19:42.209 LINK bdev_svc 00:19:42.209 LINK spdk_dd 00:19:42.209 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:19:42.209 TEST_HEADER include/spdk/accel.h 00:19:42.209 CC examples/vmd/led/led.o 00:19:42.209 TEST_HEADER include/spdk/accel_module.h 00:19:42.209 TEST_HEADER include/spdk/assert.h 00:19:42.209 TEST_HEADER include/spdk/barrier.h 00:19:42.209 TEST_HEADER include/spdk/base64.h 00:19:42.209 TEST_HEADER include/spdk/bdev.h 00:19:42.209 TEST_HEADER include/spdk/bdev_module.h 00:19:42.209 TEST_HEADER include/spdk/bdev_zone.h 00:19:42.209 TEST_HEADER include/spdk/bit_array.h 00:19:42.209 TEST_HEADER include/spdk/bit_pool.h 00:19:42.209 TEST_HEADER include/spdk/blob_bdev.h 00:19:42.209 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:19:42.209 TEST_HEADER include/spdk/blobfs.h 00:19:42.209 TEST_HEADER include/spdk/blob.h 00:19:42.209 TEST_HEADER include/spdk/conf.h 00:19:42.468 TEST_HEADER include/spdk/config.h 00:19:42.468 TEST_HEADER include/spdk/cpuset.h 00:19:42.468 TEST_HEADER include/spdk/crc16.h 00:19:42.468 TEST_HEADER include/spdk/crc32.h 00:19:42.468 TEST_HEADER include/spdk/crc64.h 00:19:42.468 TEST_HEADER include/spdk/dif.h 00:19:42.468 TEST_HEADER include/spdk/dma.h 00:19:42.468 TEST_HEADER include/spdk/endian.h 00:19:42.468 TEST_HEADER include/spdk/env_dpdk.h 00:19:42.468 TEST_HEADER include/spdk/env.h 00:19:42.468 TEST_HEADER include/spdk/event.h 00:19:42.468 TEST_HEADER include/spdk/fd_group.h 00:19:42.468 TEST_HEADER include/spdk/fd.h 00:19:42.468 TEST_HEADER include/spdk/file.h 00:19:42.468 TEST_HEADER include/spdk/fsdev.h 00:19:42.468 TEST_HEADER include/spdk/fsdev_module.h 00:19:42.468 TEST_HEADER include/spdk/ftl.h 00:19:42.468 TEST_HEADER include/spdk/fuse_dispatcher.h 00:19:42.468 TEST_HEADER include/spdk/gpt_spec.h 00:19:42.468 TEST_HEADER include/spdk/hexlify.h 00:19:42.468 TEST_HEADER include/spdk/histogram_data.h 00:19:42.468 TEST_HEADER include/spdk/idxd.h 00:19:42.468 TEST_HEADER include/spdk/idxd_spec.h 00:19:42.468 TEST_HEADER include/spdk/init.h 00:19:42.468 TEST_HEADER include/spdk/ioat.h 00:19:42.468 TEST_HEADER include/spdk/ioat_spec.h 00:19:42.468 TEST_HEADER include/spdk/iscsi_spec.h 00:19:42.468 TEST_HEADER include/spdk/json.h 00:19:42.468 TEST_HEADER include/spdk/jsonrpc.h 00:19:42.468 TEST_HEADER include/spdk/keyring.h 00:19:42.468 TEST_HEADER include/spdk/keyring_module.h 00:19:42.468 TEST_HEADER include/spdk/likely.h 00:19:42.468 TEST_HEADER include/spdk/log.h 00:19:42.468 TEST_HEADER include/spdk/lvol.h 00:19:42.468 TEST_HEADER include/spdk/md5.h 00:19:42.468 TEST_HEADER include/spdk/memory.h 00:19:42.468 TEST_HEADER include/spdk/mmio.h 00:19:42.468 TEST_HEADER include/spdk/nbd.h 00:19:42.468 TEST_HEADER include/spdk/net.h 00:19:42.468 TEST_HEADER include/spdk/notify.h 00:19:42.468 TEST_HEADER include/spdk/nvme.h 00:19:42.468 TEST_HEADER include/spdk/nvme_intel.h 00:19:42.468 CC app/fio/nvme/fio_plugin.o 00:19:42.468 TEST_HEADER include/spdk/nvme_ocssd.h 00:19:42.468 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:19:42.468 TEST_HEADER include/spdk/nvme_spec.h 00:19:42.468 TEST_HEADER include/spdk/nvme_zns.h 00:19:42.468 TEST_HEADER include/spdk/nvmf_cmd.h 00:19:42.468 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:19:42.468 TEST_HEADER include/spdk/nvmf.h 00:19:42.468 LINK led 00:19:42.468 TEST_HEADER include/spdk/nvmf_spec.h 00:19:42.468 TEST_HEADER include/spdk/nvmf_transport.h 00:19:42.468 TEST_HEADER include/spdk/opal.h 00:19:42.468 TEST_HEADER include/spdk/opal_spec.h 00:19:42.468 TEST_HEADER include/spdk/pci_ids.h 00:19:42.468 TEST_HEADER include/spdk/pipe.h 00:19:42.468 TEST_HEADER include/spdk/queue.h 00:19:42.468 TEST_HEADER include/spdk/reduce.h 00:19:42.468 TEST_HEADER include/spdk/rpc.h 00:19:42.468 TEST_HEADER include/spdk/scheduler.h 00:19:42.468 TEST_HEADER include/spdk/scsi.h 00:19:42.468 TEST_HEADER include/spdk/scsi_spec.h 00:19:42.468 TEST_HEADER include/spdk/sock.h 00:19:42.468 TEST_HEADER include/spdk/stdinc.h 00:19:42.468 TEST_HEADER include/spdk/string.h 00:19:42.468 TEST_HEADER include/spdk/thread.h 00:19:42.468 TEST_HEADER include/spdk/trace.h 00:19:42.468 TEST_HEADER include/spdk/trace_parser.h 00:19:42.468 TEST_HEADER include/spdk/tree.h 00:19:42.468 TEST_HEADER include/spdk/ublk.h 00:19:42.468 TEST_HEADER include/spdk/util.h 
00:19:42.468 CC examples/accel/perf/accel_perf.o 00:19:42.468 CC test/event/event_perf/event_perf.o 00:19:42.468 TEST_HEADER include/spdk/uuid.h 00:19:42.468 TEST_HEADER include/spdk/version.h 00:19:42.468 TEST_HEADER include/spdk/vfio_user_pci.h 00:19:42.726 TEST_HEADER include/spdk/vfio_user_spec.h 00:19:42.726 TEST_HEADER include/spdk/vhost.h 00:19:42.726 TEST_HEADER include/spdk/vmd.h 00:19:42.726 TEST_HEADER include/spdk/xor.h 00:19:42.726 TEST_HEADER include/spdk/zipf.h 00:19:42.726 CXX test/cpp_headers/accel.o 00:19:42.726 CC test/env/vtophys/vtophys.o 00:19:42.726 CC test/env/mem_callbacks/mem_callbacks.o 00:19:42.726 LINK nvme_fuzz 00:19:42.726 CC app/fio/bdev/fio_plugin.o 00:19:42.726 LINK event_perf 00:19:42.726 CXX test/cpp_headers/accel_module.o 00:19:43.011 LINK vtophys 00:19:43.011 CC test/event/reactor/reactor.o 00:19:43.011 CXX test/cpp_headers/assert.o 00:19:43.296 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:19:43.296 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:19:43.296 LINK accel_perf 00:19:43.296 LINK reactor 00:19:43.296 CC test/event/reactor_perf/reactor_perf.o 00:19:43.296 LINK spdk_nvme 00:19:43.296 CXX test/cpp_headers/barrier.o 00:19:43.296 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:19:43.296 CXX test/cpp_headers/base64.o 00:19:43.554 LINK reactor_perf 00:19:43.554 LINK mem_callbacks 00:19:43.554 LINK spdk_bdev 00:19:43.554 CC test/event/app_repeat/app_repeat.o 00:19:43.554 CXX test/cpp_headers/bdev.o 00:19:43.554 CC test/event/scheduler/scheduler.o 00:19:43.554 CC test/app/histogram_perf/histogram_perf.o 00:19:43.813 LINK app_repeat 00:19:43.813 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:19:43.813 CC test/rpc_client/rpc_client_test.o 00:19:43.813 LINK vhost_fuzz 00:19:43.813 LINK histogram_perf 00:19:44.072 CC test/nvme/aer/aer.o 00:19:44.072 CXX test/cpp_headers/bdev_module.o 00:19:44.072 LINK scheduler 00:19:44.072 LINK env_dpdk_post_init 00:19:44.072 CXX test/cpp_headers/bdev_zone.o 00:19:44.333 LINK rpc_client_test 00:19:44.333 CXX test/cpp_headers/bit_array.o 00:19:44.593 LINK aer 00:19:44.593 CC test/env/memory/memory_ut.o 00:19:44.593 CXX test/cpp_headers/bit_pool.o 00:19:44.593 CC test/accel/dif/dif.o 00:19:44.852 CC test/nvme/reset/reset.o 00:19:44.852 CC test/blobfs/mkfs/mkfs.o 00:19:44.852 CC test/lvol/esnap/esnap.o 00:19:44.852 CXX test/cpp_headers/blob_bdev.o 00:19:44.852 CC test/nvme/sgl/sgl.o 00:19:44.852 CC test/env/pci/pci_ut.o 00:19:45.110 LINK mkfs 00:19:45.110 LINK reset 00:19:45.110 CXX test/cpp_headers/blobfs_bdev.o 00:19:45.110 LINK sgl 00:19:45.369 CXX test/cpp_headers/blobfs.o 00:19:45.369 CXX test/cpp_headers/blob.o 00:19:45.369 LINK pci_ut 00:19:45.628 CC test/nvme/e2edp/nvme_dp.o 00:19:45.628 LINK dif 00:19:45.887 CXX test/cpp_headers/conf.o 00:19:45.887 LINK iscsi_fuzz 00:19:46.145 CXX test/cpp_headers/config.o 00:19:46.145 CC examples/nvme/hello_world/hello_world.o 00:19:46.145 CXX test/cpp_headers/cpuset.o 00:19:46.145 CC examples/blob/hello_world/hello_blob.o 00:19:46.145 LINK nvme_dp 00:19:46.145 CXX test/cpp_headers/crc16.o 00:19:46.404 CC examples/nvme/reconnect/reconnect.o 00:19:46.404 CC test/app/jsoncat/jsoncat.o 00:19:46.404 LINK memory_ut 00:19:46.404 LINK hello_world 00:19:46.404 LINK hello_blob 00:19:46.663 CXX test/cpp_headers/crc32.o 00:19:46.663 LINK jsoncat 00:19:46.663 CC test/nvme/overhead/overhead.o 00:19:46.663 CC examples/fsdev/hello_world/hello_fsdev.o 00:19:46.921 CXX test/cpp_headers/crc64.o 00:19:46.921 LINK reconnect 00:19:46.921 CC test/app/stub/stub.o 00:19:46.921 CC 
test/nvme/err_injection/err_injection.o 00:19:46.921 LINK overhead 00:19:46.921 CXX test/cpp_headers/dif.o 00:19:47.178 CC examples/blob/cli/blobcli.o 00:19:47.178 CC examples/bdev/hello_world/hello_bdev.o 00:19:47.178 LINK hello_fsdev 00:19:47.178 LINK stub 00:19:47.178 CXX test/cpp_headers/dma.o 00:19:47.437 LINK err_injection 00:19:47.437 CC examples/nvme/nvme_manage/nvme_manage.o 00:19:47.437 CXX test/cpp_headers/endian.o 00:19:47.437 CC examples/nvme/arbitration/arbitration.o 00:19:47.694 LINK hello_bdev 00:19:47.694 CC examples/nvme/hotplug/hotplug.o 00:19:47.694 CC test/nvme/startup/startup.o 00:19:47.694 CC test/nvme/reserve/reserve.o 00:19:47.694 LINK blobcli 00:19:47.694 CXX test/cpp_headers/env_dpdk.o 00:19:47.952 LINK startup 00:19:47.952 LINK nvme_manage 00:19:47.952 CXX test/cpp_headers/env.o 00:19:47.952 LINK reserve 00:19:47.952 LINK hotplug 00:19:47.952 LINK arbitration 00:19:47.952 CC examples/bdev/bdevperf/bdevperf.o 00:19:48.210 CC test/nvme/simple_copy/simple_copy.o 00:19:48.210 CXX test/cpp_headers/event.o 00:19:48.210 CXX test/cpp_headers/fd_group.o 00:19:48.210 CXX test/cpp_headers/fd.o 00:19:48.210 CXX test/cpp_headers/file.o 00:19:48.210 CXX test/cpp_headers/fsdev.o 00:19:48.210 CC examples/nvme/cmb_copy/cmb_copy.o 00:19:48.468 LINK simple_copy 00:19:48.468 CXX test/cpp_headers/fsdev_module.o 00:19:48.468 CC examples/nvme/abort/abort.o 00:19:48.468 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:19:48.468 CXX test/cpp_headers/ftl.o 00:19:48.468 LINK cmb_copy 00:19:48.468 CC test/nvme/connect_stress/connect_stress.o 00:19:48.726 LINK pmr_persistence 00:19:48.726 CC test/nvme/boot_partition/boot_partition.o 00:19:48.726 CXX test/cpp_headers/fuse_dispatcher.o 00:19:48.726 CC test/nvme/compliance/nvme_compliance.o 00:19:48.726 CXX test/cpp_headers/gpt_spec.o 00:19:48.726 LINK connect_stress 00:19:48.984 LINK abort 00:19:48.984 LINK boot_partition 00:19:48.984 CXX test/cpp_headers/hexlify.o 00:19:48.984 LINK bdevperf 00:19:48.984 CXX test/cpp_headers/histogram_data.o 00:19:49.242 LINK nvme_compliance 00:19:49.242 CXX test/cpp_headers/idxd.o 00:19:49.242 CC test/nvme/fused_ordering/fused_ordering.o 00:19:49.242 CC test/nvme/doorbell_aers/doorbell_aers.o 00:19:49.242 CXX test/cpp_headers/idxd_spec.o 00:19:49.242 CC test/nvme/fdp/fdp.o 00:19:49.242 CC test/bdev/bdevio/bdevio.o 00:19:49.242 CXX test/cpp_headers/init.o 00:19:49.242 CXX test/cpp_headers/ioat.o 00:19:49.500 LINK fused_ordering 00:19:49.500 CC test/nvme/cuse/cuse.o 00:19:49.500 LINK doorbell_aers 00:19:49.500 CXX test/cpp_headers/ioat_spec.o 00:19:49.500 CXX test/cpp_headers/iscsi_spec.o 00:19:49.500 CXX test/cpp_headers/json.o 00:19:49.500 LINK fdp 00:19:49.500 CXX test/cpp_headers/jsonrpc.o 00:19:49.500 CXX test/cpp_headers/keyring.o 00:19:49.758 CC examples/nvmf/nvmf/nvmf.o 00:19:49.758 CXX test/cpp_headers/keyring_module.o 00:19:49.758 LINK bdevio 00:19:49.758 CXX test/cpp_headers/likely.o 00:19:49.758 CXX test/cpp_headers/log.o 00:19:49.758 CXX test/cpp_headers/lvol.o 00:19:49.758 CXX test/cpp_headers/md5.o 00:19:49.758 CXX test/cpp_headers/memory.o 00:19:50.019 CXX test/cpp_headers/mmio.o 00:19:50.019 CXX test/cpp_headers/nbd.o 00:19:50.019 CXX test/cpp_headers/net.o 00:19:50.019 CXX test/cpp_headers/notify.o 00:19:50.019 CXX test/cpp_headers/nvme.o 00:19:50.019 CXX test/cpp_headers/nvme_intel.o 00:19:50.019 LINK nvmf 00:19:50.019 CXX test/cpp_headers/nvme_ocssd.o 00:19:50.019 CXX test/cpp_headers/nvme_ocssd_spec.o 00:19:50.019 CXX test/cpp_headers/nvme_spec.o 00:19:50.277 CXX 
test/cpp_headers/nvme_zns.o 00:19:50.277 CXX test/cpp_headers/nvmf_cmd.o 00:19:50.277 CXX test/cpp_headers/nvmf_fc_spec.o 00:19:50.277 CXX test/cpp_headers/nvmf.o 00:19:50.277 CXX test/cpp_headers/nvmf_spec.o 00:19:50.277 CXX test/cpp_headers/nvmf_transport.o 00:19:50.277 CXX test/cpp_headers/opal.o 00:19:50.277 CXX test/cpp_headers/opal_spec.o 00:19:50.537 CXX test/cpp_headers/pci_ids.o 00:19:50.537 CXX test/cpp_headers/pipe.o 00:19:50.537 CXX test/cpp_headers/queue.o 00:19:50.537 CXX test/cpp_headers/reduce.o 00:19:50.537 CXX test/cpp_headers/rpc.o 00:19:50.537 CXX test/cpp_headers/scheduler.o 00:19:50.537 CXX test/cpp_headers/scsi.o 00:19:50.537 CXX test/cpp_headers/scsi_spec.o 00:19:50.537 CXX test/cpp_headers/sock.o 00:19:50.537 CXX test/cpp_headers/stdinc.o 00:19:50.795 CXX test/cpp_headers/string.o 00:19:50.795 CXX test/cpp_headers/thread.o 00:19:50.795 CXX test/cpp_headers/trace.o 00:19:50.795 CXX test/cpp_headers/trace_parser.o 00:19:50.795 CXX test/cpp_headers/tree.o 00:19:50.795 CXX test/cpp_headers/ublk.o 00:19:50.795 CXX test/cpp_headers/util.o 00:19:50.795 CXX test/cpp_headers/uuid.o 00:19:50.795 CXX test/cpp_headers/version.o 00:19:50.795 CXX test/cpp_headers/vfio_user_pci.o 00:19:50.795 CXX test/cpp_headers/vfio_user_spec.o 00:19:51.052 CXX test/cpp_headers/vhost.o 00:19:51.052 CXX test/cpp_headers/vmd.o 00:19:51.052 LINK cuse 00:19:51.052 CXX test/cpp_headers/xor.o 00:19:51.052 CXX test/cpp_headers/zipf.o 00:19:51.311 LINK esnap 00:19:51.880 00:19:51.880 real 1m45.090s 00:19:51.880 user 9m36.313s 00:19:51.880 sys 2m13.731s 00:19:51.880 11:03:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:19:51.880 ************************************ 00:19:51.880 END TEST make 00:19:51.880 11:03:16 make -- common/autotest_common.sh@10 -- $ set +x 00:19:51.880 ************************************ 00:19:51.880 11:03:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:19:51.880 11:03:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:51.880 11:03:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:51.880 11:03:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:51.880 11:03:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:19:51.880 11:03:16 -- pm/common@44 -- $ pid=5296 00:19:51.880 11:03:16 -- pm/common@50 -- $ kill -TERM 5296 00:19:51.880 11:03:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:51.880 11:03:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:51.880 11:03:16 -- pm/common@44 -- $ pid=5298 00:19:51.880 11:03:16 -- pm/common@50 -- $ kill -TERM 5298 00:19:51.880 11:03:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:19:51.880 11:03:16 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:19:51.880 11:03:16 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:51.880 11:03:16 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:51.880 11:03:16 -- common/autotest_common.sh@1711 -- # lcov --version 00:19:51.880 11:03:16 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:51.880 11:03:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.880 11:03:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.880 11:03:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.880 11:03:16 -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.880 11:03:16 -- 
scripts/common.sh@336 -- # read -ra ver1 00:19:51.880 11:03:16 -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.880 11:03:16 -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.880 11:03:16 -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.880 11:03:16 -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.880 11:03:16 -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.880 11:03:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.880 11:03:16 -- scripts/common.sh@344 -- # case "$op" in 00:19:51.880 11:03:16 -- scripts/common.sh@345 -- # : 1 00:19:51.880 11:03:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.880 11:03:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.880 11:03:16 -- scripts/common.sh@365 -- # decimal 1 00:19:51.880 11:03:16 -- scripts/common.sh@353 -- # local d=1 00:19:51.880 11:03:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.880 11:03:16 -- scripts/common.sh@355 -- # echo 1 00:19:51.880 11:03:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:19:52.138 11:03:16 -- scripts/common.sh@366 -- # decimal 2 00:19:52.138 11:03:16 -- scripts/common.sh@353 -- # local d=2 00:19:52.138 11:03:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:52.138 11:03:16 -- scripts/common.sh@355 -- # echo 2 00:19:52.138 11:03:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:19:52.138 11:03:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:52.138 11:03:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:52.138 11:03:16 -- scripts/common.sh@368 -- # return 0 00:19:52.138 11:03:16 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:52.138 11:03:16 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:52.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.138 --rc genhtml_branch_coverage=1 00:19:52.138 --rc genhtml_function_coverage=1 00:19:52.138 --rc genhtml_legend=1 00:19:52.138 --rc geninfo_all_blocks=1 00:19:52.138 --rc geninfo_unexecuted_blocks=1 00:19:52.138 00:19:52.138 ' 00:19:52.138 11:03:16 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:52.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.138 --rc genhtml_branch_coverage=1 00:19:52.138 --rc genhtml_function_coverage=1 00:19:52.138 --rc genhtml_legend=1 00:19:52.138 --rc geninfo_all_blocks=1 00:19:52.138 --rc geninfo_unexecuted_blocks=1 00:19:52.138 00:19:52.138 ' 00:19:52.138 11:03:16 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:52.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.138 --rc genhtml_branch_coverage=1 00:19:52.138 --rc genhtml_function_coverage=1 00:19:52.138 --rc genhtml_legend=1 00:19:52.138 --rc geninfo_all_blocks=1 00:19:52.138 --rc geninfo_unexecuted_blocks=1 00:19:52.138 00:19:52.138 ' 00:19:52.138 11:03:16 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:52.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:52.138 --rc genhtml_branch_coverage=1 00:19:52.138 --rc genhtml_function_coverage=1 00:19:52.138 --rc genhtml_legend=1 00:19:52.138 --rc geninfo_all_blocks=1 00:19:52.138 --rc geninfo_unexecuted_blocks=1 00:19:52.138 00:19:52.138 ' 00:19:52.138 11:03:16 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.138 11:03:16 -- nvmf/common.sh@7 -- # uname -s 00:19:52.138 11:03:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.138 11:03:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.138 11:03:16 -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.138 11:03:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.138 11:03:16 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.138 11:03:16 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:52.138 11:03:16 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.138 11:03:16 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:52.138 11:03:16 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:19:52.138 11:03:16 -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:19:52.138 11:03:16 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.138 11:03:16 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:52.138 11:03:16 -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:52.138 11:03:16 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.138 11:03:16 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.138 11:03:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:19:52.138 11:03:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.139 11:03:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.139 11:03:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.139 11:03:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.139 11:03:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.139 11:03:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.139 11:03:16 -- paths/export.sh@5 -- # export PATH 00:19:52.139 11:03:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.139 11:03:16 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:52.139 11:03:16 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:52.139 11:03:16 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:52.139 11:03:16 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:52.139 11:03:16 -- nvmf/common.sh@50 -- # : 0 00:19:52.139 11:03:16 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:52.139 11:03:16 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:52.139 11:03:16 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:52.139 11:03:16 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.139 11:03:16 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.139 11:03:16 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:52.139 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:52.139 11:03:16 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:52.139 11:03:16 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:52.139 11:03:16 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:52.139 11:03:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:19:52.139 11:03:16 -- spdk/autotest.sh@32 -- # uname -s 00:19:52.139 11:03:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:19:52.139 11:03:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:19:52.139 11:03:16 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:52.139 11:03:16 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:19:52.139 11:03:16 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:52.139 11:03:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:19:52.139 11:03:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:19:52.139 11:03:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:19:52.139 11:03:16 -- spdk/autotest.sh@48 -- # udevadm_pid=56288 00:19:52.139 11:03:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:19:52.139 11:03:16 -- pm/common@17 -- # local monitor 00:19:52.139 11:03:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:52.139 11:03:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:52.139 11:03:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:19:52.139 11:03:16 -- pm/common@21 -- # date +%s 00:19:52.139 11:03:16 -- pm/common@25 -- # sleep 1 00:19:52.139 11:03:16 -- pm/common@21 -- # date +%s 00:19:52.139 11:03:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733396596 00:19:52.139 11:03:16 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733396596 00:19:52.139 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733396596_collect-cpu-load.pm.log 00:19:52.139 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733396596_collect-vmstat.pm.log 00:19:53.129 11:03:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:19:53.129 11:03:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:19:53.129 11:03:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.129 11:03:17 -- common/autotest_common.sh@10 -- # set +x 00:19:53.129 11:03:17 -- spdk/autotest.sh@59 -- # create_test_list 00:19:53.129 11:03:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:19:53.129 11:03:17 -- common/autotest_common.sh@10 -- # set +x 00:19:53.129 11:03:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:19:53.129 11:03:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:19:53.129 11:03:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:19:53.129 11:03:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:19:53.129 11:03:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:19:53.129 11:03:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:19:53.129 11:03:17 -- common/autotest_common.sh@1457 -- # uname 00:19:53.129 11:03:17 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:19:53.129 11:03:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:19:53.129 11:03:17 -- common/autotest_common.sh@1477 -- # uname 00:19:53.129 11:03:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:19:53.129 11:03:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:19:53.129 11:03:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:19:53.415 lcov: LCOV version 1.15 00:19:53.415 11:03:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:20:15.391 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:20:15.391 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:20:33.504 11:03:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:20:33.504 11:03:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.504 11:03:55 -- common/autotest_common.sh@10 -- # set +x 00:20:33.504 11:03:55 -- spdk/autotest.sh@78 -- # rm -f 00:20:33.504 11:03:55 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:33.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:33.504 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:33.504 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:33.504 11:03:56 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:20:33.504 11:03:56 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:33.504 11:03:56 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:33.504 11:03:56 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:20:33.504 11:03:56 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:20:33.504 11:03:56 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:20:33.504 11:03:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:33.504 11:03:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:20:33.504 11:03:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:33.504 11:03:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:20:33.504 11:03:56 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:33.504 11:03:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:33.504 11:03:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:33.504 11:03:56 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:20:33.504 11:03:56 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:20:33.504 11:03:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:33.504 11:03:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:20:33.504 11:03:56 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:33.504 11:03:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:33.504 11:03:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
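The get_zoned_devs trace above walks every controller under /sys/class/nvme, reads each namespace's queue/zoned attribute, and skips anything that does not report "none". A minimal standalone sketch of that detection logic follows; the variable names and loop shape are inferred from the trace, not copied from autotest_common.sh, so treat this as an illustration rather than the script's actual implementation:

#!/usr/bin/env bash
# Sketch: collect NVMe namespaces whose block queue reports a zoned model.
# Mirrors the checks visible in the trace: per controller, resolve its PCI
# address, then test <ns>/queue/zoned for each namespace ("none" = not zoned).
declare -A zoned_devs
for nvme in /sys/class/nvme/nvme*; do
    [[ -e $nvme ]] || continue
    # The "device" symlink of a controller points at its PCI directory,
    # whose basename is the BDF (e.g. 0000:00:10.0).
    bdf=$(basename "$(readlink -f "$nvme/device")")
    for ns in "$nvme"/nvme*n*; do
        [[ -e $ns ]] || continue
        dev=$(basename "$ns")
        if [[ -e /sys/block/$dev/queue/zoned ]] &&
           [[ $(< "/sys/block/$dev/queue/zoned") != none ]]; then
            zoned_devs[$dev]=$bdf
        fi
    done
done
printf 'zoned namespace: %s\n' "${!zoned_devs[@]}"

In this run every namespace evaluates to "none", which is why the subsequent (( 0 > 0 )) guard skips the zoned-device handling entirely.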
00:20:33.504 11:03:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:33.504 11:03:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:20:33.504 11:03:56 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:20:33.504 11:03:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:20:33.504 11:03:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:33.504 11:03:56 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:20:33.504 11:03:56 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:20:33.504 11:03:56 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:20:33.504 11:03:56 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:20:33.504 11:03:56 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:33.504 11:03:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:20:33.504 11:03:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:33.504 11:03:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:33.504 11:03:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:20:33.504 11:03:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:20:33.504 11:03:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:20:33.504 No valid GPT data, bailing 00:20:33.504 11:03:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:33.504 11:03:56 -- scripts/common.sh@394 -- # pt= 00:20:33.504 11:03:56 -- scripts/common.sh@395 -- # return 1 00:20:33.504 11:03:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:20:33.504 1+0 records in 00:20:33.504 1+0 records out 00:20:33.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689586 s, 152 MB/s 00:20:33.504 11:03:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:33.504 11:03:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:33.504 11:03:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:20:33.504 11:03:56 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:20:33.504 11:03:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:20:33.504 No valid GPT data, bailing 00:20:33.504 11:03:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:33.504 11:03:56 -- scripts/common.sh@394 -- # pt= 00:20:33.504 11:03:56 -- scripts/common.sh@395 -- # return 1 00:20:33.504 11:03:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:20:33.504 1+0 records in 00:20:33.504 1+0 records out 00:20:33.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495662 s, 212 MB/s 00:20:33.504 11:03:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:33.505 11:03:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:33.505 11:03:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:20:33.505 11:03:56 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:20:33.505 11:03:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:20:33.505 No valid GPT data, bailing 00:20:33.505 11:03:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:33.505 11:03:56 -- scripts/common.sh@394 -- # pt= 00:20:33.505 11:03:56 -- scripts/common.sh@395 -- # return 1 00:20:33.505 11:03:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:20:33.505 1+0 records in 00:20:33.505 1+0 records out 00:20:33.505 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.00477801 s, 219 MB/s 00:20:33.505 11:03:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:33.505 11:03:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:33.505 11:03:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:20:33.505 11:03:56 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:20:33.505 11:03:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:20:33.505 No valid GPT data, bailing 00:20:33.505 11:03:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:33.505 11:03:56 -- scripts/common.sh@394 -- # pt= 00:20:33.505 11:03:56 -- scripts/common.sh@395 -- # return 1 00:20:33.505 11:03:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:20:33.505 1+0 records in 00:20:33.505 1+0 records out 00:20:33.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00305448 s, 343 MB/s 00:20:33.505 11:03:56 -- spdk/autotest.sh@105 -- # sync 00:20:33.505 11:03:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:20:33.505 11:03:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:20:33.505 11:03:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:20:34.880 11:03:59 -- spdk/autotest.sh@111 -- # uname -s 00:20:34.880 11:03:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:20:34.880 11:03:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:20:34.880 11:03:59 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:35.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:35.446 Hugepages 00:20:35.446 node hugesize free / total 00:20:35.446 node0 1048576kB 0 / 0 00:20:35.446 node0 2048kB 0 / 0 00:20:35.446 00:20:35.446 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:35.446 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:20:35.446 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:20:35.704 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:20:35.704 11:04:00 -- spdk/autotest.sh@117 -- # uname -s 00:20:35.704 11:04:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:20:35.704 11:04:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:20:35.704 11:04:00 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:36.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:36.529 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:36.529 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:36.529 11:04:01 -- common/autotest_common.sh@1517 -- # sleep 1 00:20:37.462 11:04:02 -- common/autotest_common.sh@1518 -- # bdfs=() 00:20:37.462 11:04:02 -- common/autotest_common.sh@1518 -- # local bdfs 00:20:37.462 11:04:02 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:20:37.462 11:04:02 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:20:37.462 11:04:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:37.462 11:04:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:37.462 11:04:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:37.462 11:04:02 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:37.462 11:04:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:37.720 11:04:02 
-- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:37.720 11:04:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:37.720 11:04:02 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:37.977 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.977 Waiting for block devices as requested 00:20:37.977 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.235 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.235 11:04:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:38.235 11:04:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:20:38.235 11:04:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:20:38.235 11:04:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:20:38.235 11:04:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:38.235 11:04:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:38.235 11:04:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:20:38.235 11:04:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:38.235 11:04:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:38.235 11:04:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:38.235 11:04:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:20:38.235 11:04:02 -- common/autotest_common.sh@1543 -- # continue 00:20:38.235 11:04:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:38.235 11:04:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:20:38.235 11:04:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:20:38.235 11:04:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:20:38.235 11:04:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:38.235 11:04:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:20:38.235 11:04:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:38.235 11:04:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:20:38.235 11:04:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:20:38.235 11:04:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 
00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:38.235 11:04:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:38.235 11:04:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:38.235 11:04:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:20:38.514 11:04:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:38.514 11:04:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:20:38.514 11:04:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:38.514 11:04:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:38.514 11:04:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:20:38.514 11:04:02 -- common/autotest_common.sh@1543 -- # continue 00:20:38.514 11:04:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:20:38.514 11:04:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.514 11:04:02 -- common/autotest_common.sh@10 -- # set +x 00:20:38.514 11:04:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:20:38.514 11:04:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.514 11:04:02 -- common/autotest_common.sh@10 -- # set +x 00:20:38.514 11:04:02 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:39.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:39.338 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:39.339 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:39.339 11:04:03 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:20:39.339 11:04:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.339 11:04:03 -- common/autotest_common.sh@10 -- # set +x 00:20:39.339 11:04:03 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:20:39.339 11:04:03 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:20:39.339 11:04:03 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:20:39.339 11:04:03 -- common/autotest_common.sh@1563 -- # bdfs=() 00:20:39.339 11:04:03 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:20:39.339 11:04:03 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:20:39.339 11:04:03 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:20:39.339 11:04:03 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:20:39.339 11:04:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:39.339 11:04:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:39.339 11:04:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:39.339 11:04:03 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:39.339 11:04:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:39.597 11:04:04 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:20:39.597 11:04:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:39.597 11:04:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:39.597 11:04:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:20:39.597 11:04:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:39.597 11:04:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:39.597 11:04:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:39.597 11:04:04 -- 
common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:20:39.597 11:04:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:39.597 11:04:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:39.597 11:04:04 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:20:39.597 11:04:04 -- common/autotest_common.sh@1572 -- # return 0 00:20:39.597 11:04:04 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:20:39.597 11:04:04 -- common/autotest_common.sh@1580 -- # return 0 00:20:39.597 11:04:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:20:39.597 11:04:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:20:39.597 11:04:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:20:39.597 11:04:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:20:39.597 11:04:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:20:39.597 11:04:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.597 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:20:39.597 11:04:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:20:39.597 11:04:04 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:39.597 11:04:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:39.597 11:04:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.597 11:04:04 -- common/autotest_common.sh@10 -- # set +x 00:20:39.597 ************************************ 00:20:39.597 START TEST env 00:20:39.597 ************************************ 00:20:39.597 11:04:04 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:39.597 * Looking for test storage... 00:20:39.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:20:39.597 11:04:04 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:39.597 11:04:04 env -- common/autotest_common.sh@1711 -- # lcov --version 00:20:39.597 11:04:04 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:39.936 11:04:04 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:39.936 11:04:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.936 11:04:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.937 11:04:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.937 11:04:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.937 11:04:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.937 11:04:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.937 11:04:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.937 11:04:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.937 11:04:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.937 11:04:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.937 11:04:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.937 11:04:04 env -- scripts/common.sh@344 -- # case "$op" in 00:20:39.937 11:04:04 env -- scripts/common.sh@345 -- # : 1 00:20:39.937 11:04:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.937 11:04:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.937 11:04:04 env -- scripts/common.sh@365 -- # decimal 1 00:20:39.937 11:04:04 env -- scripts/common.sh@353 -- # local d=1 00:20:39.937 11:04:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.937 11:04:04 env -- scripts/common.sh@355 -- # echo 1 00:20:39.937 11:04:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.937 11:04:04 env -- scripts/common.sh@366 -- # decimal 2 00:20:39.937 11:04:04 env -- scripts/common.sh@353 -- # local d=2 00:20:39.937 11:04:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.937 11:04:04 env -- scripts/common.sh@355 -- # echo 2 00:20:39.937 11:04:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.937 11:04:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.937 11:04:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.937 11:04:04 env -- scripts/common.sh@368 -- # return 0 00:20:39.937 11:04:04 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.937 11:04:04 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.937 --rc genhtml_branch_coverage=1 00:20:39.937 --rc genhtml_function_coverage=1 00:20:39.937 --rc genhtml_legend=1 00:20:39.937 --rc geninfo_all_blocks=1 00:20:39.937 --rc geninfo_unexecuted_blocks=1 00:20:39.937 00:20:39.937 ' 00:20:39.937 11:04:04 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.937 --rc genhtml_branch_coverage=1 00:20:39.937 --rc genhtml_function_coverage=1 00:20:39.937 --rc genhtml_legend=1 00:20:39.937 --rc geninfo_all_blocks=1 00:20:39.937 --rc geninfo_unexecuted_blocks=1 00:20:39.937 00:20:39.937 ' 00:20:39.937 11:04:04 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.937 --rc genhtml_branch_coverage=1 00:20:39.937 --rc genhtml_function_coverage=1 00:20:39.937 --rc genhtml_legend=1 00:20:39.937 --rc geninfo_all_blocks=1 00:20:39.937 --rc geninfo_unexecuted_blocks=1 00:20:39.937 00:20:39.937 ' 00:20:39.937 11:04:04 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.937 --rc genhtml_branch_coverage=1 00:20:39.937 --rc genhtml_function_coverage=1 00:20:39.937 --rc genhtml_legend=1 00:20:39.937 --rc geninfo_all_blocks=1 00:20:39.937 --rc geninfo_unexecuted_blocks=1 00:20:39.937 00:20:39.937 ' 00:20:39.937 11:04:04 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:39.937 11:04:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:39.937 11:04:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.937 11:04:04 env -- common/autotest_common.sh@10 -- # set +x 00:20:39.937 ************************************ 00:20:39.937 START TEST env_memory 00:20:39.937 ************************************ 00:20:39.937 11:04:04 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:39.937 00:20:39.937 00:20:39.937 CUnit - A unit testing framework for C - Version 2.1-3 00:20:39.937 http://cunit.sourceforge.net/ 00:20:39.937 00:20:39.937 00:20:39.937 Suite: memory 00:20:39.937 Test: alloc and free memory map ...[2024-12-05 11:04:04.372680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:20:39.937 passed 00:20:39.937 Test: mem map translation ...[2024-12-05 11:04:04.406085] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:20:39.937 [2024-12-05 11:04:04.406130] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:20:39.937 [2024-12-05 11:04:04.406189] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:20:39.937 [2024-12-05 11:04:04.406201] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:20:39.937 passed 00:20:39.937 Test: mem map registration ...[2024-12-05 11:04:04.469283] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:20:39.937 [2024-12-05 11:04:04.469334] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:20:39.937 passed 00:20:39.937 Test: mem map adjacent registrations ...passed 00:20:39.937 00:20:39.937 Run Summary: Type Total Ran Passed Failed Inactive 00:20:39.937 suites 1 1 n/a 0 0 00:20:39.937 tests 4 4 4 0 0 00:20:39.937 asserts 152 152 152 0 n/a 00:20:39.937 00:20:39.937 Elapsed time = 0.216 seconds 00:20:39.937 00:20:39.937 real 0m0.237s 00:20:39.937 user 0m0.220s 00:20:39.937 sys 0m0.013s 00:20:39.937 11:04:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.937 11:04:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:20:39.937 ************************************ 00:20:39.937 END TEST env_memory 00:20:39.937 ************************************ 00:20:40.243 11:04:04 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:40.243 11:04:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.243 11:04:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.243 11:04:04 env -- common/autotest_common.sh@10 -- # set +x 00:20:40.243 ************************************ 00:20:40.243 START TEST env_vtophys 00:20:40.243 ************************************ 00:20:40.243 11:04:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:40.243 EAL: lib.eal log level changed from notice to debug 00:20:40.243 EAL: Detected lcore 0 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 1 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 2 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 3 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 4 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 5 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 6 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 7 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 8 as core 0 on socket 0 00:20:40.243 EAL: Detected lcore 9 as core 0 on socket 0 00:20:40.243 EAL: Maximum logical cores by configuration: 128 00:20:40.243 EAL: Detected CPU lcores: 10 00:20:40.243 EAL: Detected NUMA nodes: 1 00:20:40.243 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:20:40.243 EAL: Detected shared linkage of DPDK 00:20:40.243 EAL: No 
shared files mode enabled, IPC will be disabled 00:20:40.243 EAL: Selected IOVA mode 'PA' 00:20:40.243 EAL: Probing VFIO support... 00:20:40.243 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:40.243 EAL: VFIO modules not loaded, skipping VFIO support... 00:20:40.243 EAL: Ask a virtual area of 0x2e000 bytes 00:20:40.243 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:20:40.243 EAL: Setting up physically contiguous memory... 00:20:40.243 EAL: Setting maximum number of open files to 524288 00:20:40.243 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:20:40.243 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:20:40.243 EAL: Ask a virtual area of 0x61000 bytes 00:20:40.243 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:20:40.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:40.243 EAL: Ask a virtual area of 0x400000000 bytes 00:20:40.243 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:20:40.243 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:20:40.243 EAL: Ask a virtual area of 0x61000 bytes 00:20:40.243 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:20:40.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:40.243 EAL: Ask a virtual area of 0x400000000 bytes 00:20:40.243 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:20:40.243 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:20:40.243 EAL: Ask a virtual area of 0x61000 bytes 00:20:40.243 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:20:40.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:40.243 EAL: Ask a virtual area of 0x400000000 bytes 00:20:40.243 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:20:40.243 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:20:40.243 EAL: Ask a virtual area of 0x61000 bytes 00:20:40.243 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:20:40.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:40.243 EAL: Ask a virtual area of 0x400000000 bytes 00:20:40.243 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:20:40.243 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:20:40.243 EAL: Hugepages will be freed exactly as allocated. 00:20:40.243 EAL: No shared files mode enabled, IPC is disabled 00:20:40.243 EAL: No shared files mode enabled, IPC is disabled 00:20:40.243 EAL: TSC frequency is ~2100000 KHz 00:20:40.243 EAL: Main lcore 0 is ready (tid=7f46bb900a00;cpuset=[0]) 00:20:40.243 EAL: Trying to obtain current memory policy. 00:20:40.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.243 EAL: Restoring previous memory policy: 0 00:20:40.243 EAL: request: mp_malloc_sync 00:20:40.243 EAL: No shared files mode enabled, IPC is disabled 00:20:40.243 EAL: Heap on socket 0 was expanded by 2MB 00:20:40.243 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:40.243 EAL: No PCI address specified using 'addr=' in: bus=pci 00:20:40.243 EAL: Mem event callback 'spdk:(nil)' registered 00:20:40.243 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:20:40.243 00:20:40.243 00:20:40.243 CUnit - A unit testing framework for C - Version 2.1-3 00:20:40.243 http://cunit.sourceforge.net/ 00:20:40.244 00:20:40.244 00:20:40.244 Suite: components_suite 00:20:40.244 Test: vtophys_malloc_test ...passed 00:20:40.244 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:20:40.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.244 EAL: Restoring previous memory policy: 4 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was expanded by 4MB 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was shrunk by 4MB 00:20:40.244 EAL: Trying to obtain current memory policy. 00:20:40.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.244 EAL: Restoring previous memory policy: 4 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was expanded by 6MB 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was shrunk by 6MB 00:20:40.244 EAL: Trying to obtain current memory policy. 00:20:40.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.244 EAL: Restoring previous memory policy: 4 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was expanded by 10MB 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was shrunk by 10MB 00:20:40.244 EAL: Trying to obtain current memory policy. 00:20:40.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.244 EAL: Restoring previous memory policy: 4 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was expanded by 18MB 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was shrunk by 18MB 00:20:40.244 EAL: Trying to obtain current memory policy. 00:20:40.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.244 EAL: Restoring previous memory policy: 4 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was expanded by 34MB 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was shrunk by 34MB 00:20:40.244 EAL: Trying to obtain current memory policy. 
00:20:40.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.244 EAL: Restoring previous memory policy: 4 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was expanded by 66MB 00:20:40.244 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.244 EAL: request: mp_malloc_sync 00:20:40.244 EAL: No shared files mode enabled, IPC is disabled 00:20:40.244 EAL: Heap on socket 0 was shrunk by 66MB 00:20:40.244 EAL: Trying to obtain current memory policy. 00:20:40.244 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.503 EAL: Restoring previous memory policy: 4 00:20:40.503 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.503 EAL: request: mp_malloc_sync 00:20:40.503 EAL: No shared files mode enabled, IPC is disabled 00:20:40.503 EAL: Heap on socket 0 was expanded by 130MB 00:20:40.503 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.503 EAL: request: mp_malloc_sync 00:20:40.503 EAL: No shared files mode enabled, IPC is disabled 00:20:40.503 EAL: Heap on socket 0 was shrunk by 130MB 00:20:40.503 EAL: Trying to obtain current memory policy. 00:20:40.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.503 EAL: Restoring previous memory policy: 4 00:20:40.503 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.503 EAL: request: mp_malloc_sync 00:20:40.503 EAL: No shared files mode enabled, IPC is disabled 00:20:40.503 EAL: Heap on socket 0 was expanded by 258MB 00:20:40.503 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.503 EAL: request: mp_malloc_sync 00:20:40.503 EAL: No shared files mode enabled, IPC is disabled 00:20:40.503 EAL: Heap on socket 0 was shrunk by 258MB 00:20:40.503 EAL: Trying to obtain current memory policy. 00:20:40.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.760 EAL: Restoring previous memory policy: 4 00:20:40.760 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.760 EAL: request: mp_malloc_sync 00:20:40.760 EAL: No shared files mode enabled, IPC is disabled 00:20:40.760 EAL: Heap on socket 0 was expanded by 514MB 00:20:40.760 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.760 EAL: request: mp_malloc_sync 00:20:40.760 EAL: No shared files mode enabled, IPC is disabled 00:20:40.760 EAL: Heap on socket 0 was shrunk by 514MB 00:20:40.760 EAL: Trying to obtain current memory policy. 
00:20:40.760 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:41.017 EAL: Restoring previous memory policy: 4 00:20:41.017 EAL: Calling mem event callback 'spdk:(nil)' 00:20:41.017 EAL: request: mp_malloc_sync 00:20:41.017 EAL: No shared files mode enabled, IPC is disabled 00:20:41.017 EAL: Heap on socket 0 was expanded by 1026MB 00:20:41.274 EAL: Calling mem event callback 'spdk:(nil)' 00:20:41.274 EAL: request: mp_malloc_sync 00:20:41.275 EAL: No shared files mode enabled, IPC is disabled 00:20:41.275 EAL: Heap on socket 0 was shrunk by 1026MB 00:20:41.275 passed 00:20:41.275 00:20:41.275 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.275 suites 1 1 n/a 0 0 00:20:41.275 tests 2 2 2 0 0 00:20:41.275 asserts 5470 5470 5470 0 n/a 00:20:41.275 00:20:41.275 Elapsed time = 1.014 seconds 00:20:41.275 EAL: Calling mem event callback 'spdk:(nil)' 00:20:41.275 EAL: request: mp_malloc_sync 00:20:41.275 EAL: No shared files mode enabled, IPC is disabled 00:20:41.275 EAL: Heap on socket 0 was shrunk by 2MB 00:20:41.275 EAL: No shared files mode enabled, IPC is disabled 00:20:41.275 EAL: No shared files mode enabled, IPC is disabled 00:20:41.275 EAL: No shared files mode enabled, IPC is disabled 00:20:41.275 00:20:41.275 real 0m1.244s 00:20:41.275 user 0m0.661s 00:20:41.275 sys 0m0.449s 00:20:41.275 11:04:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.275 ************************************ 00:20:41.275 END TEST env_vtophys 00:20:41.275 11:04:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:20:41.275 ************************************ 00:20:41.275 11:04:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:20:41.275 11:04:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.275 11:04:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.275 11:04:05 env -- common/autotest_common.sh@10 -- # set +x 00:20:41.275 ************************************ 00:20:41.275 START TEST env_pci 00:20:41.275 ************************************ 00:20:41.275 11:04:05 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:20:41.532 00:20:41.532 00:20:41.532 CUnit - A unit testing framework for C - Version 2.1-3 00:20:41.532 http://cunit.sourceforge.net/ 00:20:41.532 00:20:41.532 00:20:41.532 Suite: pci 00:20:41.532 Test: pci_hook ...[2024-12-05 11:04:05.938518] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58589 has claimed it 00:20:41.532 passed 00:20:41.532 00:20:41.532 EAL: Cannot find device (10000:00:01.0) 00:20:41.532 EAL: Failed to attach device on primary process 00:20:41.532 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.532 suites 1 1 n/a 0 0 00:20:41.532 tests 1 1 1 0 0 00:20:41.532 asserts 25 25 25 0 n/a 00:20:41.532 00:20:41.532 Elapsed time = 0.002 seconds 00:20:41.532 00:20:41.532 real 0m0.021s 00:20:41.532 user 0m0.013s 00:20:41.532 sys 0m0.008s 00:20:41.532 11:04:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.532 11:04:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:20:41.532 ************************************ 00:20:41.532 END TEST env_pci 00:20:41.532 ************************************ 00:20:41.532 11:04:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:20:41.532 11:04:05 env -- env/env.sh@15 -- # uname 00:20:41.532 11:04:06 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:20:41.532 11:04:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:20:41.532 11:04:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:20:41.532 11:04:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:41.532 11:04:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.532 11:04:06 env -- common/autotest_common.sh@10 -- # set +x 00:20:41.532 ************************************ 00:20:41.532 START TEST env_dpdk_post_init 00:20:41.532 ************************************ 00:20:41.532 11:04:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:20:41.532 EAL: Detected CPU lcores: 10 00:20:41.532 EAL: Detected NUMA nodes: 1 00:20:41.532 EAL: Detected shared linkage of DPDK 00:20:41.532 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:20:41.532 EAL: Selected IOVA mode 'PA' 00:20:41.532 TELEMETRY: No legacy callbacks, legacy socket not created 00:20:41.791 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:20:41.791 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:20:41.791 Starting DPDK initialization... 00:20:41.791 Starting SPDK post initialization... 00:20:41.791 SPDK NVMe probe 00:20:41.791 Attaching to 0000:00:10.0 00:20:41.791 Attaching to 0000:00:11.0 00:20:41.791 Attached to 0000:00:10.0 00:20:41.791 Attached to 0000:00:11.0 00:20:41.791 Cleaning up... 00:20:41.791 00:20:41.791 real 0m0.202s 00:20:41.791 user 0m0.056s 00:20:41.791 sys 0m0.046s 00:20:41.791 11:04:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.791 ************************************ 00:20:41.791 END TEST env_dpdk_post_init 00:20:41.791 11:04:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:20:41.791 ************************************ 00:20:41.791 11:04:06 env -- env/env.sh@26 -- # uname 00:20:41.791 11:04:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:20:41.791 11:04:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:20:41.791 11:04:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:41.791 11:04:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.791 11:04:06 env -- common/autotest_common.sh@10 -- # set +x 00:20:41.791 ************************************ 00:20:41.791 START TEST env_mem_callbacks 00:20:41.791 ************************************ 00:20:41.791 11:04:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:20:41.791 EAL: Detected CPU lcores: 10 00:20:41.791 EAL: Detected NUMA nodes: 1 00:20:41.791 EAL: Detected shared linkage of DPDK 00:20:41.791 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:20:41.791 EAL: Selected IOVA mode 'PA' 00:20:41.791 TELEMETRY: No legacy callbacks, legacy socket not created 00:20:41.791 00:20:41.791 00:20:41.791 CUnit - A unit testing framework for C - Version 2.1-3 00:20:41.791 http://cunit.sourceforge.net/ 00:20:41.791 00:20:41.791 00:20:41.791 Suite: memory 00:20:41.791 Test: test ... 
00:20:41.791 register 0x200000200000 2097152 00:20:41.791 malloc 3145728 00:20:41.791 register 0x200000400000 4194304 00:20:41.791 buf 0x200000500000 len 3145728 PASSED 00:20:41.791 malloc 64 00:20:41.791 buf 0x2000004fff40 len 64 PASSED 00:20:41.791 malloc 4194304 00:20:41.791 register 0x200000800000 6291456 00:20:41.791 buf 0x200000a00000 len 4194304 PASSED 00:20:41.791 free 0x200000500000 3145728 00:20:41.791 free 0x2000004fff40 64 00:20:41.791 unregister 0x200000400000 4194304 PASSED 00:20:41.791 free 0x200000a00000 4194304 00:20:41.791 unregister 0x200000800000 6291456 PASSED 00:20:41.791 malloc 8388608 00:20:41.791 register 0x200000400000 10485760 00:20:41.791 buf 0x200000600000 len 8388608 PASSED 00:20:41.791 free 0x200000600000 8388608 00:20:41.791 unregister 0x200000400000 10485760 PASSED 00:20:41.791 passed 00:20:41.791 00:20:41.791 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.791 suites 1 1 n/a 0 0 00:20:41.791 tests 1 1 1 0 0 00:20:41.791 asserts 15 15 15 0 n/a 00:20:41.791 00:20:41.791 Elapsed time = 0.008 seconds 00:20:41.791 00:20:41.791 real 0m0.151s 00:20:41.791 user 0m0.023s 00:20:41.791 sys 0m0.028s 00:20:41.791 11:04:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.791 11:04:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:20:41.791 ************************************ 00:20:41.791 END TEST env_mem_callbacks 00:20:41.791 ************************************ 00:20:42.050 00:20:42.050 real 0m2.393s 00:20:42.050 user 0m1.195s 00:20:42.050 sys 0m0.852s 00:20:42.050 11:04:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.050 11:04:06 env -- common/autotest_common.sh@10 -- # set +x 00:20:42.050 ************************************ 00:20:42.050 END TEST env 00:20:42.050 ************************************ 00:20:42.050 11:04:06 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:20:42.050 11:04:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:42.050 11:04:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.050 11:04:06 -- common/autotest_common.sh@10 -- # set +x 00:20:42.050 ************************************ 00:20:42.050 START TEST rpc 00:20:42.050 ************************************ 00:20:42.050 11:04:06 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:20:42.050 * Looking for test storage... 
00:20:42.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:20:42.050 11:04:06 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.050 11:04:06 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.050 11:04:06 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.308 11:04:06 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.308 11:04:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.308 11:04:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.308 11:04:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.308 11:04:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.308 11:04:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.308 11:04:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.308 11:04:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.308 11:04:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.308 11:04:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.308 11:04:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.308 11:04:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.308 11:04:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:42.308 11:04:06 rpc -- scripts/common.sh@345 -- # : 1 00:20:42.308 11:04:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.308 11:04:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.308 11:04:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:20:42.309 11:04:06 rpc -- scripts/common.sh@353 -- # local d=1 00:20:42.309 11:04:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.309 11:04:06 rpc -- scripts/common.sh@355 -- # echo 1 00:20:42.309 11:04:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.309 11:04:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:20:42.309 11:04:06 rpc -- scripts/common.sh@353 -- # local d=2 00:20:42.309 11:04:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.309 11:04:06 rpc -- scripts/common.sh@355 -- # echo 2 00:20:42.309 11:04:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.309 11:04:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.309 11:04:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.309 11:04:06 rpc -- scripts/common.sh@368 -- # return 0 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.309 --rc genhtml_branch_coverage=1 00:20:42.309 --rc genhtml_function_coverage=1 00:20:42.309 --rc genhtml_legend=1 00:20:42.309 --rc geninfo_all_blocks=1 00:20:42.309 --rc geninfo_unexecuted_blocks=1 00:20:42.309 00:20:42.309 ' 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.309 --rc genhtml_branch_coverage=1 00:20:42.309 --rc genhtml_function_coverage=1 00:20:42.309 --rc genhtml_legend=1 00:20:42.309 --rc geninfo_all_blocks=1 00:20:42.309 --rc geninfo_unexecuted_blocks=1 00:20:42.309 00:20:42.309 ' 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.309 --rc genhtml_branch_coverage=1 00:20:42.309 --rc genhtml_function_coverage=1 00:20:42.309 --rc 
genhtml_legend=1 00:20:42.309 --rc geninfo_all_blocks=1 00:20:42.309 --rc geninfo_unexecuted_blocks=1 00:20:42.309 00:20:42.309 ' 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.309 --rc genhtml_branch_coverage=1 00:20:42.309 --rc genhtml_function_coverage=1 00:20:42.309 --rc genhtml_legend=1 00:20:42.309 --rc geninfo_all_blocks=1 00:20:42.309 --rc geninfo_unexecuted_blocks=1 00:20:42.309 00:20:42.309 ' 00:20:42.309 11:04:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58711 00:20:42.309 11:04:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:20:42.309 11:04:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:42.309 11:04:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58711 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 58711 ']' 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.309 11:04:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.309 [2024-12-05 11:04:06.813087] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:20:42.309 [2024-12-05 11:04:06.813187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58711 ] 00:20:42.567 [2024-12-05 11:04:06.968622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.567 [2024-12-05 11:04:07.032279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:20:42.567 [2024-12-05 11:04:07.032341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58711' to capture a snapshot of events at runtime. 00:20:42.567 [2024-12-05 11:04:07.032357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.567 [2024-12-05 11:04:07.032370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.567 [2024-12-05 11:04:07.032382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58711 for offline analysis/debug. 
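At this point spdk_tgt (pid 58711) is up with the bdev tracepoint group enabled via '-e bdev', which is why trace_get_info later in this log reports a tpoint_group_mask of 0x8 and a shm path of /dev/shm/spdk_tgt_trace.pid58711. The notices above spell out how to snapshot those tracepoints; a short sketch of that workflow, assuming the build/bin layout used throughout this log:

  # Snapshot the tracepoints of the running target, as the notices suggest.
  # 58711 is this run's pid; substitute the pid printed by your own target.
  ./build/bin/spdk_trace -s spdk_tgt -p 58711
  # After the target exits, the copied shm file can be read offline instead
  # (spdk_trace's -f file mode; an assumption, it is not shown in this log):
  ./build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid58711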
00:20:42.567 [2024-12-05 11:04:07.032802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.826 11:04:07 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.826 11:04:07 rpc -- common/autotest_common.sh@868 -- # return 0 00:20:42.826 11:04:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:20:42.826 11:04:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:20:42.826 11:04:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:20:42.826 11:04:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:20:42.826 11:04:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:42.826 11:04:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.826 11:04:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.826 ************************************ 00:20:42.826 START TEST rpc_integrity 00:20:42.826 ************************************ 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:20:42.826 { 00:20:42.826 "aliases": [ 00:20:42.826 "3060e0cf-bf2a-455b-be2a-18036a189425" 00:20:42.826 ], 00:20:42.826 "assigned_rate_limits": { 00:20:42.826 "r_mbytes_per_sec": 0, 00:20:42.826 "rw_ios_per_sec": 0, 00:20:42.826 "rw_mbytes_per_sec": 0, 00:20:42.826 "w_mbytes_per_sec": 0 00:20:42.826 }, 00:20:42.826 "block_size": 512, 00:20:42.826 "claimed": false, 00:20:42.826 "driver_specific": {}, 00:20:42.826 "memory_domains": [ 00:20:42.826 { 00:20:42.826 "dma_device_id": "system", 00:20:42.826 "dma_device_type": 1 00:20:42.826 }, 00:20:42.826 { 00:20:42.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.826 "dma_device_type": 2 00:20:42.826 } 00:20:42.826 ], 00:20:42.826 "name": "Malloc0", 
00:20:42.826 "num_blocks": 16384, 00:20:42.826 "product_name": "Malloc disk", 00:20:42.826 "supported_io_types": { 00:20:42.826 "abort": true, 00:20:42.826 "compare": false, 00:20:42.826 "compare_and_write": false, 00:20:42.826 "copy": true, 00:20:42.826 "flush": true, 00:20:42.826 "get_zone_info": false, 00:20:42.826 "nvme_admin": false, 00:20:42.826 "nvme_io": false, 00:20:42.826 "nvme_io_md": false, 00:20:42.826 "nvme_iov_md": false, 00:20:42.826 "read": true, 00:20:42.826 "reset": true, 00:20:42.826 "seek_data": false, 00:20:42.826 "seek_hole": false, 00:20:42.826 "unmap": true, 00:20:42.826 "write": true, 00:20:42.826 "write_zeroes": true, 00:20:42.826 "zcopy": true, 00:20:42.826 "zone_append": false, 00:20:42.826 "zone_management": false 00:20:42.826 }, 00:20:42.826 "uuid": "3060e0cf-bf2a-455b-be2a-18036a189425", 00:20:42.826 "zoned": false 00:20:42.826 } 00:20:42.826 ]' 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:42.826 [2024-12-05 11:04:07.457025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:20:42.826 [2024-12-05 11:04:07.457073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.826 [2024-12-05 11:04:07.457090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21968a0 00:20:42.826 [2024-12-05 11:04:07.457099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.826 [2024-12-05 11:04:07.458571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.826 [2024-12-05 11:04:07.458613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:20:42.826 Passthru0 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.826 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.826 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.086 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.086 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:20:43.086 { 00:20:43.086 "aliases": [ 00:20:43.086 "3060e0cf-bf2a-455b-be2a-18036a189425" 00:20:43.086 ], 00:20:43.086 "assigned_rate_limits": { 00:20:43.086 "r_mbytes_per_sec": 0, 00:20:43.086 "rw_ios_per_sec": 0, 00:20:43.086 "rw_mbytes_per_sec": 0, 00:20:43.086 "w_mbytes_per_sec": 0 00:20:43.086 }, 00:20:43.086 "block_size": 512, 00:20:43.086 "claim_type": "exclusive_write", 00:20:43.086 "claimed": true, 00:20:43.086 "driver_specific": {}, 00:20:43.086 "memory_domains": [ 00:20:43.086 { 00:20:43.086 "dma_device_id": "system", 00:20:43.086 "dma_device_type": 1 00:20:43.086 }, 00:20:43.086 { 00:20:43.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.086 "dma_device_type": 2 00:20:43.086 } 00:20:43.086 ], 00:20:43.086 "name": "Malloc0", 00:20:43.086 "num_blocks": 16384, 00:20:43.086 "product_name": "Malloc disk", 00:20:43.086 "supported_io_types": { 00:20:43.086 "abort": true, 00:20:43.086 "compare": false, 00:20:43.086 
"compare_and_write": false, 00:20:43.086 "copy": true, 00:20:43.086 "flush": true, 00:20:43.086 "get_zone_info": false, 00:20:43.086 "nvme_admin": false, 00:20:43.086 "nvme_io": false, 00:20:43.086 "nvme_io_md": false, 00:20:43.086 "nvme_iov_md": false, 00:20:43.086 "read": true, 00:20:43.086 "reset": true, 00:20:43.086 "seek_data": false, 00:20:43.086 "seek_hole": false, 00:20:43.086 "unmap": true, 00:20:43.086 "write": true, 00:20:43.086 "write_zeroes": true, 00:20:43.086 "zcopy": true, 00:20:43.086 "zone_append": false, 00:20:43.086 "zone_management": false 00:20:43.086 }, 00:20:43.086 "uuid": "3060e0cf-bf2a-455b-be2a-18036a189425", 00:20:43.086 "zoned": false 00:20:43.086 }, 00:20:43.086 { 00:20:43.086 "aliases": [ 00:20:43.086 "82dc3c32-ee5c-580d-bf11-89aa20738c06" 00:20:43.086 ], 00:20:43.086 "assigned_rate_limits": { 00:20:43.086 "r_mbytes_per_sec": 0, 00:20:43.086 "rw_ios_per_sec": 0, 00:20:43.086 "rw_mbytes_per_sec": 0, 00:20:43.086 "w_mbytes_per_sec": 0 00:20:43.086 }, 00:20:43.086 "block_size": 512, 00:20:43.086 "claimed": false, 00:20:43.086 "driver_specific": { 00:20:43.086 "passthru": { 00:20:43.086 "base_bdev_name": "Malloc0", 00:20:43.086 "name": "Passthru0" 00:20:43.086 } 00:20:43.086 }, 00:20:43.086 "memory_domains": [ 00:20:43.086 { 00:20:43.086 "dma_device_id": "system", 00:20:43.086 "dma_device_type": 1 00:20:43.086 }, 00:20:43.086 { 00:20:43.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.086 "dma_device_type": 2 00:20:43.086 } 00:20:43.086 ], 00:20:43.086 "name": "Passthru0", 00:20:43.086 "num_blocks": 16384, 00:20:43.086 "product_name": "passthru", 00:20:43.086 "supported_io_types": { 00:20:43.086 "abort": true, 00:20:43.086 "compare": false, 00:20:43.086 "compare_and_write": false, 00:20:43.086 "copy": true, 00:20:43.086 "flush": true, 00:20:43.086 "get_zone_info": false, 00:20:43.086 "nvme_admin": false, 00:20:43.086 "nvme_io": false, 00:20:43.086 "nvme_io_md": false, 00:20:43.086 "nvme_iov_md": false, 00:20:43.086 "read": true, 00:20:43.086 "reset": true, 00:20:43.086 "seek_data": false, 00:20:43.086 "seek_hole": false, 00:20:43.086 "unmap": true, 00:20:43.086 "write": true, 00:20:43.086 "write_zeroes": true, 00:20:43.086 "zcopy": true, 00:20:43.086 "zone_append": false, 00:20:43.086 "zone_management": false 00:20:43.086 }, 00:20:43.086 "uuid": "82dc3c32-ee5c-580d-bf11-89aa20738c06", 00:20:43.086 "zoned": false 00:20:43.086 } 00:20:43.086 ]' 00:20:43.086 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:20:43.086 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:20:43.086 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:20:43.086 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.086 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.086 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.086 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:43.086 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.086 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.086 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.086 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:43.087 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.087 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:20:43.087 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.087 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:20:43.087 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:20:43.087 11:04:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:20:43.087 ************************************ 00:20:43.087 END TEST rpc_integrity 00:20:43.087 ************************************ 00:20:43.087 00:20:43.087 real 0m0.313s 00:20:43.087 user 0m0.175s 00:20:43.087 sys 0m0.057s 00:20:43.087 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.087 11:04:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.087 11:04:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:20:43.087 11:04:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:43.087 11:04:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.087 11:04:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.087 ************************************ 00:20:43.087 START TEST rpc_plugins 00:20:43.087 ************************************ 00:20:43.087 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:20:43.087 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:20:43.087 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.087 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:43.087 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.087 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:20:43.087 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:20:43.087 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.087 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:43.087 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.087 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:20:43.087 { 00:20:43.087 "aliases": [ 00:20:43.087 "e05bb923-4aab-4ff4-9a93-e9f329061c8e" 00:20:43.087 ], 00:20:43.087 "assigned_rate_limits": { 00:20:43.087 "r_mbytes_per_sec": 0, 00:20:43.087 "rw_ios_per_sec": 0, 00:20:43.087 "rw_mbytes_per_sec": 0, 00:20:43.087 "w_mbytes_per_sec": 0 00:20:43.087 }, 00:20:43.087 "block_size": 4096, 00:20:43.087 "claimed": false, 00:20:43.087 "driver_specific": {}, 00:20:43.087 "memory_domains": [ 00:20:43.087 { 00:20:43.087 "dma_device_id": "system", 00:20:43.087 "dma_device_type": 1 00:20:43.087 }, 00:20:43.087 { 00:20:43.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.087 "dma_device_type": 2 00:20:43.087 } 00:20:43.087 ], 00:20:43.087 "name": "Malloc1", 00:20:43.087 "num_blocks": 256, 00:20:43.087 "product_name": "Malloc disk", 00:20:43.087 "supported_io_types": { 00:20:43.087 "abort": true, 00:20:43.087 "compare": false, 00:20:43.087 "compare_and_write": false, 00:20:43.087 "copy": true, 00:20:43.087 "flush": true, 00:20:43.087 "get_zone_info": false, 00:20:43.087 "nvme_admin": false, 00:20:43.087 "nvme_io": false, 00:20:43.087 "nvme_io_md": false, 00:20:43.087 "nvme_iov_md": false, 00:20:43.087 "read": true, 00:20:43.087 "reset": true, 00:20:43.087 "seek_data": false, 00:20:43.087 "seek_hole": false, 00:20:43.087 "unmap": true, 00:20:43.087 "write": true, 00:20:43.087 "write_zeroes": true, 00:20:43.087 "zcopy": true, 00:20:43.087 "zone_append": false, 
00:20:43.087 "zone_management": false 00:20:43.087 }, 00:20:43.087 "uuid": "e05bb923-4aab-4ff4-9a93-e9f329061c8e", 00:20:43.087 "zoned": false 00:20:43.087 } 00:20:43.087 ]' 00:20:43.087 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:20:43.356 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:20:43.356 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.356 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.356 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:20:43.356 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:20:43.356 ************************************ 00:20:43.356 END TEST rpc_plugins 00:20:43.356 ************************************ 00:20:43.356 11:04:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:20:43.356 00:20:43.356 real 0m0.150s 00:20:43.356 user 0m0.089s 00:20:43.356 sys 0m0.022s 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.356 11:04:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:43.356 11:04:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:20:43.356 11:04:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:43.356 11:04:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.356 11:04:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.356 ************************************ 00:20:43.356 START TEST rpc_trace_cmd_test 00:20:43.356 ************************************ 00:20:43.356 11:04:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:20:43.356 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:20:43.356 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:20:43.356 11:04:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.356 11:04:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.356 11:04:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.356 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:20:43.356 "bdev": { 00:20:43.356 "mask": "0x8", 00:20:43.356 "tpoint_mask": "0xffffffffffffffff" 00:20:43.356 }, 00:20:43.356 "bdev_nvme": { 00:20:43.356 "mask": "0x4000", 00:20:43.356 "tpoint_mask": "0x0" 00:20:43.356 }, 00:20:43.356 "bdev_raid": { 00:20:43.356 "mask": "0x20000", 00:20:43.356 "tpoint_mask": "0x0" 00:20:43.356 }, 00:20:43.356 "blob": { 00:20:43.356 "mask": "0x10000", 00:20:43.356 "tpoint_mask": "0x0" 00:20:43.356 }, 00:20:43.356 "blobfs": { 00:20:43.356 "mask": "0x80", 00:20:43.356 "tpoint_mask": "0x0" 00:20:43.356 }, 00:20:43.356 "dsa": { 00:20:43.356 "mask": "0x200", 00:20:43.356 "tpoint_mask": "0x0" 00:20:43.356 }, 00:20:43.356 "ftl": { 00:20:43.356 "mask": "0x40", 00:20:43.356 "tpoint_mask": "0x0" 00:20:43.356 }, 00:20:43.356 "iaa": { 00:20:43.356 "mask": "0x1000", 
00:20:43.356 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "iscsi_conn": { 00:20:43.357 "mask": "0x2", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "nvme_pcie": { 00:20:43.357 "mask": "0x800", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "nvme_tcp": { 00:20:43.357 "mask": "0x2000", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "nvmf_rdma": { 00:20:43.357 "mask": "0x10", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "nvmf_tcp": { 00:20:43.357 "mask": "0x20", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "scheduler": { 00:20:43.357 "mask": "0x40000", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "scsi": { 00:20:43.357 "mask": "0x4", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "sock": { 00:20:43.357 "mask": "0x8000", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "thread": { 00:20:43.357 "mask": "0x400", 00:20:43.357 "tpoint_mask": "0x0" 00:20:43.357 }, 00:20:43.357 "tpoint_group_mask": "0x8", 00:20:43.357 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58711" 00:20:43.357 }' 00:20:43.357 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:20:43.357 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:20:43.357 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:20:43.357 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:20:43.357 11:04:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:20:43.357 11:04:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:20:43.357 11:04:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:20:43.615 11:04:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:20:43.615 11:04:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:20:43.615 ************************************ 00:20:43.615 END TEST rpc_trace_cmd_test 00:20:43.615 ************************************ 00:20:43.615 11:04:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:20:43.615 00:20:43.615 real 0m0.184s 00:20:43.615 user 0m0.155s 00:20:43.615 sys 0m0.021s 00:20:43.615 11:04:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.615 11:04:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.615 11:04:08 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:20:43.615 11:04:08 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:20:43.615 11:04:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:43.615 11:04:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.615 11:04:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.615 ************************************ 00:20:43.615 START TEST go_rpc 00:20:43.615 ************************************ 00:20:43.615 11:04:08 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:20:43.615 11:04:08 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.615 11:04:08 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.615 11:04:08 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["b53c0a44-3d9c-4de1-8087-19a22129bf03"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"b53c0a44-3d9c-4de1-8087-19a22129bf03","zoned":false}]' 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:43.615 11:04:08 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.615 11:04:08 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.615 11:04:08 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.615 11:04:08 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:20:43.874 11:04:08 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:20:43.874 11:04:08 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:20:43.874 ************************************ 00:20:43.874 END TEST go_rpc 00:20:43.874 ************************************ 00:20:43.874 11:04:08 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:20:43.874 00:20:43.874 real 0m0.181s 00:20:43.874 user 0m0.114s 00:20:43.874 sys 0m0.039s 00:20:43.874 11:04:08 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.874 11:04:08 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.874 11:04:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:20:43.874 11:04:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:20:43.874 11:04:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:43.874 11:04:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.874 11:04:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.874 ************************************ 00:20:43.874 START TEST rpc_daemon_integrity 00:20:43.874 ************************************ 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:20:43.874 
11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:20:43.874 { 00:20:43.874 "aliases": [ 00:20:43.874 "8ccce69d-f912-4ee5-9526-9f50bcaf3e8e" 00:20:43.874 ], 00:20:43.874 "assigned_rate_limits": { 00:20:43.874 "r_mbytes_per_sec": 0, 00:20:43.874 "rw_ios_per_sec": 0, 00:20:43.874 "rw_mbytes_per_sec": 0, 00:20:43.874 "w_mbytes_per_sec": 0 00:20:43.874 }, 00:20:43.874 "block_size": 512, 00:20:43.874 "claimed": false, 00:20:43.874 "driver_specific": {}, 00:20:43.874 "memory_domains": [ 00:20:43.874 { 00:20:43.874 "dma_device_id": "system", 00:20:43.874 "dma_device_type": 1 00:20:43.874 }, 00:20:43.874 { 00:20:43.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.874 "dma_device_type": 2 00:20:43.874 } 00:20:43.874 ], 00:20:43.874 "name": "Malloc3", 00:20:43.874 "num_blocks": 16384, 00:20:43.874 "product_name": "Malloc disk", 00:20:43.874 "supported_io_types": { 00:20:43.874 "abort": true, 00:20:43.874 "compare": false, 00:20:43.874 "compare_and_write": false, 00:20:43.874 "copy": true, 00:20:43.874 "flush": true, 00:20:43.874 "get_zone_info": false, 00:20:43.874 "nvme_admin": false, 00:20:43.874 "nvme_io": false, 00:20:43.874 "nvme_io_md": false, 00:20:43.874 "nvme_iov_md": false, 00:20:43.874 "read": true, 00:20:43.874 "reset": true, 00:20:43.874 "seek_data": false, 00:20:43.874 "seek_hole": false, 00:20:43.874 "unmap": true, 00:20:43.874 "write": true, 00:20:43.874 "write_zeroes": true, 00:20:43.874 "zcopy": true, 00:20:43.874 "zone_append": false, 00:20:43.874 "zone_management": false 00:20:43.874 }, 00:20:43.874 "uuid": "8ccce69d-f912-4ee5-9526-9f50bcaf3e8e", 00:20:43.874 "zoned": false 00:20:43.874 } 00:20:43.874 ]' 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:43.874 [2024-12-05 11:04:08.489333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:20:43.874 [2024-12-05 11:04:08.489518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.874 [2024-12-05 11:04:08.489546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2055230 00:20:43.874 [2024-12-05 11:04:08.489556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:20:43.874 [2024-12-05 11:04:08.490985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.874 [2024-12-05 11:04:08.491020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:20:43.874 Passthru0 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.874 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:44.133 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.133 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:20:44.133 { 00:20:44.134 "aliases": [ 00:20:44.134 "8ccce69d-f912-4ee5-9526-9f50bcaf3e8e" 00:20:44.134 ], 00:20:44.134 "assigned_rate_limits": { 00:20:44.134 "r_mbytes_per_sec": 0, 00:20:44.134 "rw_ios_per_sec": 0, 00:20:44.134 "rw_mbytes_per_sec": 0, 00:20:44.134 "w_mbytes_per_sec": 0 00:20:44.134 }, 00:20:44.134 "block_size": 512, 00:20:44.134 "claim_type": "exclusive_write", 00:20:44.134 "claimed": true, 00:20:44.134 "driver_specific": {}, 00:20:44.134 "memory_domains": [ 00:20:44.134 { 00:20:44.134 "dma_device_id": "system", 00:20:44.134 "dma_device_type": 1 00:20:44.134 }, 00:20:44.134 { 00:20:44.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.134 "dma_device_type": 2 00:20:44.134 } 00:20:44.134 ], 00:20:44.134 "name": "Malloc3", 00:20:44.134 "num_blocks": 16384, 00:20:44.134 "product_name": "Malloc disk", 00:20:44.134 "supported_io_types": { 00:20:44.134 "abort": true, 00:20:44.134 "compare": false, 00:20:44.134 "compare_and_write": false, 00:20:44.134 "copy": true, 00:20:44.134 "flush": true, 00:20:44.134 "get_zone_info": false, 00:20:44.134 "nvme_admin": false, 00:20:44.134 "nvme_io": false, 00:20:44.134 "nvme_io_md": false, 00:20:44.134 "nvme_iov_md": false, 00:20:44.134 "read": true, 00:20:44.134 "reset": true, 00:20:44.134 "seek_data": false, 00:20:44.134 "seek_hole": false, 00:20:44.134 "unmap": true, 00:20:44.134 "write": true, 00:20:44.134 "write_zeroes": true, 00:20:44.134 "zcopy": true, 00:20:44.134 "zone_append": false, 00:20:44.134 "zone_management": false 00:20:44.134 }, 00:20:44.134 "uuid": "8ccce69d-f912-4ee5-9526-9f50bcaf3e8e", 00:20:44.134 "zoned": false 00:20:44.134 }, 00:20:44.134 { 00:20:44.134 "aliases": [ 00:20:44.134 "99f45b9d-b416-52cb-b818-14c983aa7d24" 00:20:44.134 ], 00:20:44.134 "assigned_rate_limits": { 00:20:44.134 "r_mbytes_per_sec": 0, 00:20:44.134 "rw_ios_per_sec": 0, 00:20:44.134 "rw_mbytes_per_sec": 0, 00:20:44.134 "w_mbytes_per_sec": 0 00:20:44.134 }, 00:20:44.134 "block_size": 512, 00:20:44.134 "claimed": false, 00:20:44.134 "driver_specific": { 00:20:44.134 "passthru": { 00:20:44.134 "base_bdev_name": "Malloc3", 00:20:44.134 "name": "Passthru0" 00:20:44.134 } 00:20:44.134 }, 00:20:44.134 "memory_domains": [ 00:20:44.134 { 00:20:44.134 "dma_device_id": "system", 00:20:44.134 "dma_device_type": 1 00:20:44.134 }, 00:20:44.134 { 00:20:44.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.134 "dma_device_type": 2 00:20:44.134 } 00:20:44.134 ], 00:20:44.134 "name": "Passthru0", 00:20:44.134 "num_blocks": 16384, 00:20:44.134 "product_name": "passthru", 00:20:44.134 "supported_io_types": { 00:20:44.134 "abort": true, 00:20:44.134 "compare": false, 00:20:44.134 "compare_and_write": false, 00:20:44.134 "copy": true, 
00:20:44.134 "flush": true, 00:20:44.134 "get_zone_info": false, 00:20:44.134 "nvme_admin": false, 00:20:44.134 "nvme_io": false, 00:20:44.134 "nvme_io_md": false, 00:20:44.134 "nvme_iov_md": false, 00:20:44.134 "read": true, 00:20:44.134 "reset": true, 00:20:44.134 "seek_data": false, 00:20:44.134 "seek_hole": false, 00:20:44.134 "unmap": true, 00:20:44.134 "write": true, 00:20:44.134 "write_zeroes": true, 00:20:44.134 "zcopy": true, 00:20:44.134 "zone_append": false, 00:20:44.134 "zone_management": false 00:20:44.134 }, 00:20:44.134 "uuid": "99f45b9d-b416-52cb-b818-14c983aa7d24", 00:20:44.134 "zoned": false 00:20:44.134 } 00:20:44.134 ]' 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:20:44.134 ************************************ 00:20:44.134 END TEST rpc_daemon_integrity 00:20:44.134 ************************************ 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:20:44.134 00:20:44.134 real 0m0.280s 00:20:44.134 user 0m0.166s 00:20:44.134 sys 0m0.048s 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.134 11:04:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:44.134 11:04:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:44.134 11:04:08 rpc -- rpc/rpc.sh@84 -- # killprocess 58711 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@954 -- # '[' -z 58711 ']' 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@958 -- # kill -0 58711 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@959 -- # uname 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58711 00:20:44.134 killing process with pid 58711 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58711' 00:20:44.134 11:04:08 rpc -- 
common/autotest_common.sh@973 -- # kill 58711 00:20:44.134 11:04:08 rpc -- common/autotest_common.sh@978 -- # wait 58711 00:20:44.392 ************************************ 00:20:44.392 END TEST rpc 00:20:44.392 ************************************ 00:20:44.392 00:20:44.392 real 0m2.500s 00:20:44.392 user 0m3.124s 00:20:44.392 sys 0m0.785s 00:20:44.392 11:04:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.392 11:04:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:44.651 11:04:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:20:44.651 11:04:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:44.651 11:04:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.651 11:04:09 -- common/autotest_common.sh@10 -- # set +x 00:20:44.651 ************************************ 00:20:44.651 START TEST skip_rpc 00:20:44.651 ************************************ 00:20:44.651 11:04:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:20:44.651 * Looking for test storage... 00:20:44.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:20:44.651 11:04:09 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:44.651 11:04:09 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:20:44.651 11:04:09 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:44.651 11:04:09 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@345 -- # : 1 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.651 11:04:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.910 11:04:09 skip_rpc -- scripts/common.sh@368 -- # return 0 00:20:44.910 11:04:09 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.910 11:04:09 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.910 --rc genhtml_branch_coverage=1 00:20:44.910 --rc genhtml_function_coverage=1 00:20:44.910 --rc genhtml_legend=1 00:20:44.910 --rc geninfo_all_blocks=1 00:20:44.910 --rc geninfo_unexecuted_blocks=1 00:20:44.910 00:20:44.910 ' 00:20:44.911 11:04:09 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:44.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.911 --rc genhtml_branch_coverage=1 00:20:44.911 --rc genhtml_function_coverage=1 00:20:44.911 --rc genhtml_legend=1 00:20:44.911 --rc geninfo_all_blocks=1 00:20:44.911 --rc geninfo_unexecuted_blocks=1 00:20:44.911 00:20:44.911 ' 00:20:44.911 11:04:09 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:44.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.911 --rc genhtml_branch_coverage=1 00:20:44.911 --rc genhtml_function_coverage=1 00:20:44.911 --rc genhtml_legend=1 00:20:44.911 --rc geninfo_all_blocks=1 00:20:44.911 --rc geninfo_unexecuted_blocks=1 00:20:44.911 00:20:44.911 ' 00:20:44.911 11:04:09 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:44.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.911 --rc genhtml_branch_coverage=1 00:20:44.911 --rc genhtml_function_coverage=1 00:20:44.911 --rc genhtml_legend=1 00:20:44.911 --rc geninfo_all_blocks=1 00:20:44.911 --rc geninfo_unexecuted_blocks=1 00:20:44.911 00:20:44.911 ' 00:20:44.911 11:04:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:44.911 11:04:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:20:44.911 11:04:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:20:44.911 11:04:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:44.911 11:04:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.911 11:04:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:44.911 ************************************ 00:20:44.911 START TEST skip_rpc 00:20:44.911 ************************************ 00:20:44.911 11:04:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:20:44.911 11:04:09 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58967 00:20:44.911 11:04:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:20:44.911 11:04:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:44.911 11:04:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:20:44.911 [2024-12-05 11:04:09.398024] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:20:44.911 [2024-12-05 11:04:09.398105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:20:44.911 [2024-12-05 11:04:09.547388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.171 [2024-12-05 11:04:09.610482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.449 2024/12/05 11:04:14 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58967 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58967 ']' 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58967 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58967 00:20:50.449 killing process with pid 58967 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.449 11:04:14 skip_rpc.skip_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58967' 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58967 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58967 00:20:50.449 00:20:50.449 real 0m5.396s 00:20:50.449 user 0m5.043s 00:20:50.449 sys 0m0.285s 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.449 ************************************ 00:20:50.449 END TEST skip_rpc 00:20:50.449 ************************************ 00:20:50.449 11:04:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.449 11:04:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:20:50.449 11:04:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.449 11:04:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.449 11:04:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.449 ************************************ 00:20:50.449 START TEST skip_rpc_with_json 00:20:50.449 ************************************ 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:20:50.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59059 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59059 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59059 ']' 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.449 11:04:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:50.449 [2024-12-05 11:04:14.855318] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
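skip_rpc_with_json, starting here, runs a target that does listen: it first proves nvmf_get_transports fails while no TCP transport exists, creates one, and then snapshots the live configuration with save_config into the CONFIG_PATH declared earlier (test/rpc/config.json, the JSON dump that follows). A hedged sketch of that save/restore round trip; the rpc.py calls mirror the rpc_cmd lines below, while restarting from the saved file is an assumption about what the suite does after this excerpt:

  # Save/restore round trip behind skip_rpc_with_json.
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > test/rpc/config.json
  # Later, a fresh target can be booted straight from the snapshot:
  ./build/bin/spdk_tgt -m 0x1 --json test/rpc/config.json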
00:20:50.449 [2024-12-05 11:04:14.855717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59059 ] 00:20:50.449 [2024-12-05 11:04:15.008202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.449 [2024-12-05 11:04:15.063952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:51.453 [2024-12-05 11:04:15.858010] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:20:51.453 2024/12/05 11:04:15 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:20:51.453 request: 00:20:51.453 { 00:20:51.453 "method": "nvmf_get_transports", 00:20:51.453 "params": { 00:20:51.453 "trtype": "tcp" 00:20:51.453 } 00:20:51.453 } 00:20:51.453 Got JSON-RPC error response 00:20:51.453 GoRPCClient: error on JSON-RPC call 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:51.453 [2024-12-05 11:04:15.870087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.453 11:04:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:51.453 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.453 11:04:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:51.453 { 00:20:51.453 "subsystems": [ 00:20:51.453 { 00:20:51.453 "subsystem": "fsdev", 00:20:51.453 "config": [ 00:20:51.453 { 00:20:51.453 "method": "fsdev_set_opts", 00:20:51.453 "params": { 00:20:51.453 "fsdev_io_cache_size": 256, 00:20:51.453 "fsdev_io_pool_size": 65535 00:20:51.453 } 00:20:51.453 } 00:20:51.453 ] 00:20:51.453 }, 00:20:51.453 { 00:20:51.453 "subsystem": "keyring", 00:20:51.453 "config": [] 00:20:51.453 }, 00:20:51.453 { 00:20:51.453 "subsystem": "iobuf", 00:20:51.453 "config": [ 00:20:51.453 { 00:20:51.453 "method": "iobuf_set_options", 00:20:51.453 "params": { 00:20:51.453 "enable_numa": false, 00:20:51.453 "large_bufsize": 135168, 00:20:51.453 "large_pool_count": 1024, 00:20:51.453 "small_bufsize": 8192, 00:20:51.453 "small_pool_count": 8192 00:20:51.453 } 
00:20:51.453 } 00:20:51.453 ] 00:20:51.453 }, 00:20:51.453 { 00:20:51.453 "subsystem": "sock", 00:20:51.453 "config": [ 00:20:51.453 { 00:20:51.453 "method": "sock_set_default_impl", 00:20:51.453 "params": { 00:20:51.453 "impl_name": "posix" 00:20:51.453 } 00:20:51.453 }, 00:20:51.453 { 00:20:51.453 "method": "sock_impl_set_options", 00:20:51.453 "params": { 00:20:51.453 "enable_ktls": false, 00:20:51.453 "enable_placement_id": 0, 00:20:51.453 "enable_quickack": false, 00:20:51.453 "enable_recv_pipe": true, 00:20:51.453 "enable_zerocopy_send_client": false, 00:20:51.453 "enable_zerocopy_send_server": true, 00:20:51.453 "impl_name": "ssl", 00:20:51.453 "recv_buf_size": 4096, 00:20:51.453 "send_buf_size": 4096, 00:20:51.453 "tls_version": 0, 00:20:51.453 "zerocopy_threshold": 0 00:20:51.453 } 00:20:51.453 }, 00:20:51.453 { 00:20:51.453 "method": "sock_impl_set_options", 00:20:51.453 "params": { 00:20:51.453 "enable_ktls": false, 00:20:51.453 "enable_placement_id": 0, 00:20:51.453 "enable_quickack": false, 00:20:51.453 "enable_recv_pipe": true, 00:20:51.453 "enable_zerocopy_send_client": false, 00:20:51.453 "enable_zerocopy_send_server": true, 00:20:51.453 "impl_name": "posix", 00:20:51.453 "recv_buf_size": 2097152, 00:20:51.453 "send_buf_size": 2097152, 00:20:51.453 "tls_version": 0, 00:20:51.453 "zerocopy_threshold": 0 00:20:51.453 } 00:20:51.453 } 00:20:51.453 ] 00:20:51.453 }, 00:20:51.453 { 00:20:51.454 "subsystem": "vmd", 00:20:51.454 "config": [] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "accel", 00:20:51.454 "config": [ 00:20:51.454 { 00:20:51.454 "method": "accel_set_options", 00:20:51.454 "params": { 00:20:51.454 "buf_count": 2048, 00:20:51.454 "large_cache_size": 16, 00:20:51.454 "sequence_count": 2048, 00:20:51.454 "small_cache_size": 128, 00:20:51.454 "task_count": 2048 00:20:51.454 } 00:20:51.454 } 00:20:51.454 ] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "bdev", 00:20:51.454 "config": [ 00:20:51.454 { 00:20:51.454 "method": "bdev_set_options", 00:20:51.454 "params": { 00:20:51.454 "bdev_auto_examine": true, 00:20:51.454 "bdev_io_cache_size": 256, 00:20:51.454 "bdev_io_pool_size": 65535, 00:20:51.454 "iobuf_large_cache_size": 16, 00:20:51.454 "iobuf_small_cache_size": 128 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "bdev_raid_set_options", 00:20:51.454 "params": { 00:20:51.454 "process_max_bandwidth_mb_sec": 0, 00:20:51.454 "process_window_size_kb": 1024 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "bdev_iscsi_set_options", 00:20:51.454 "params": { 00:20:51.454 "timeout_sec": 30 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "bdev_nvme_set_options", 00:20:51.454 "params": { 00:20:51.454 "action_on_timeout": "none", 00:20:51.454 "allow_accel_sequence": false, 00:20:51.454 "arbitration_burst": 0, 00:20:51.454 "bdev_retry_count": 3, 00:20:51.454 "ctrlr_loss_timeout_sec": 0, 00:20:51.454 "delay_cmd_submit": true, 00:20:51.454 "dhchap_dhgroups": [ 00:20:51.454 "null", 00:20:51.454 "ffdhe2048", 00:20:51.454 "ffdhe3072", 00:20:51.454 "ffdhe4096", 00:20:51.454 "ffdhe6144", 00:20:51.454 "ffdhe8192" 00:20:51.454 ], 00:20:51.454 "dhchap_digests": [ 00:20:51.454 "sha256", 00:20:51.454 "sha384", 00:20:51.454 "sha512" 00:20:51.454 ], 00:20:51.454 "disable_auto_failback": false, 00:20:51.454 "fast_io_fail_timeout_sec": 0, 00:20:51.454 "generate_uuids": false, 00:20:51.454 "high_priority_weight": 0, 00:20:51.454 "io_path_stat": false, 00:20:51.454 "io_queue_requests": 0, 00:20:51.454 
"keep_alive_timeout_ms": 10000, 00:20:51.454 "low_priority_weight": 0, 00:20:51.454 "medium_priority_weight": 0, 00:20:51.454 "nvme_adminq_poll_period_us": 10000, 00:20:51.454 "nvme_error_stat": false, 00:20:51.454 "nvme_ioq_poll_period_us": 0, 00:20:51.454 "rdma_cm_event_timeout_ms": 0, 00:20:51.454 "rdma_max_cq_size": 0, 00:20:51.454 "rdma_srq_size": 0, 00:20:51.454 "reconnect_delay_sec": 0, 00:20:51.454 "timeout_admin_us": 0, 00:20:51.454 "timeout_us": 0, 00:20:51.454 "transport_ack_timeout": 0, 00:20:51.454 "transport_retry_count": 4, 00:20:51.454 "transport_tos": 0 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "bdev_nvme_set_hotplug", 00:20:51.454 "params": { 00:20:51.454 "enable": false, 00:20:51.454 "period_us": 100000 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "bdev_wait_for_examine" 00:20:51.454 } 00:20:51.454 ] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "scsi", 00:20:51.454 "config": null 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "scheduler", 00:20:51.454 "config": [ 00:20:51.454 { 00:20:51.454 "method": "framework_set_scheduler", 00:20:51.454 "params": { 00:20:51.454 "name": "static" 00:20:51.454 } 00:20:51.454 } 00:20:51.454 ] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "vhost_scsi", 00:20:51.454 "config": [] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "vhost_blk", 00:20:51.454 "config": [] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "ublk", 00:20:51.454 "config": [] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "nbd", 00:20:51.454 "config": [] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "nvmf", 00:20:51.454 "config": [ 00:20:51.454 { 00:20:51.454 "method": "nvmf_set_config", 00:20:51.454 "params": { 00:20:51.454 "admin_cmd_passthru": { 00:20:51.454 "identify_ctrlr": false 00:20:51.454 }, 00:20:51.454 "dhchap_dhgroups": [ 00:20:51.454 "null", 00:20:51.454 "ffdhe2048", 00:20:51.454 "ffdhe3072", 00:20:51.454 "ffdhe4096", 00:20:51.454 "ffdhe6144", 00:20:51.454 "ffdhe8192" 00:20:51.454 ], 00:20:51.454 "dhchap_digests": [ 00:20:51.454 "sha256", 00:20:51.454 "sha384", 00:20:51.454 "sha512" 00:20:51.454 ], 00:20:51.454 "discovery_filter": "match_any" 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "nvmf_set_max_subsystems", 00:20:51.454 "params": { 00:20:51.454 "max_subsystems": 1024 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "nvmf_set_crdt", 00:20:51.454 "params": { 00:20:51.454 "crdt1": 0, 00:20:51.454 "crdt2": 0, 00:20:51.454 "crdt3": 0 00:20:51.454 } 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "method": "nvmf_create_transport", 00:20:51.454 "params": { 00:20:51.454 "abort_timeout_sec": 1, 00:20:51.454 "ack_timeout": 0, 00:20:51.454 "buf_cache_size": 4294967295, 00:20:51.454 "c2h_success": true, 00:20:51.454 "data_wr_pool_size": 0, 00:20:51.454 "dif_insert_or_strip": false, 00:20:51.454 "in_capsule_data_size": 4096, 00:20:51.454 "io_unit_size": 131072, 00:20:51.454 "max_aq_depth": 128, 00:20:51.454 "max_io_qpairs_per_ctrlr": 127, 00:20:51.454 "max_io_size": 131072, 00:20:51.454 "max_queue_depth": 128, 00:20:51.454 "num_shared_buffers": 511, 00:20:51.454 "sock_priority": 0, 00:20:51.454 "trtype": "TCP", 00:20:51.454 "zcopy": false 00:20:51.454 } 00:20:51.454 } 00:20:51.454 ] 00:20:51.454 }, 00:20:51.454 { 00:20:51.454 "subsystem": "iscsi", 00:20:51.454 "config": [ 00:20:51.454 { 00:20:51.454 "method": "iscsi_set_options", 00:20:51.454 "params": { 00:20:51.454 "allow_duplicated_isid": false, 
00:20:51.454 "chap_group": 0, 00:20:51.454 "data_out_pool_size": 2048, 00:20:51.454 "default_time2retain": 20, 00:20:51.454 "default_time2wait": 2, 00:20:51.454 "disable_chap": false, 00:20:51.454 "error_recovery_level": 0, 00:20:51.454 "first_burst_length": 8192, 00:20:51.454 "immediate_data": true, 00:20:51.454 "immediate_data_pool_size": 16384, 00:20:51.454 "max_connections_per_session": 2, 00:20:51.454 "max_large_datain_per_connection": 64, 00:20:51.454 "max_queue_depth": 64, 00:20:51.454 "max_r2t_per_connection": 4, 00:20:51.454 "max_sessions": 128, 00:20:51.454 "mutual_chap": false, 00:20:51.454 "node_base": "iqn.2016-06.io.spdk", 00:20:51.454 "nop_in_interval": 30, 00:20:51.454 "nop_timeout": 60, 00:20:51.454 "pdu_pool_size": 36864, 00:20:51.454 "require_chap": false 00:20:51.454 } 00:20:51.455 } 00:20:51.455 ] 00:20:51.455 } 00:20:51.455 ] 00:20:51.455 } 00:20:51.455 11:04:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:51.455 11:04:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59059 00:20:51.455 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59059 ']' 00:20:51.455 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59059 00:20:51.455 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:20:51.455 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.455 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59059 00:20:51.712 killing process with pid 59059 00:20:51.712 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.712 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.712 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59059' 00:20:51.712 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59059 00:20:51.712 11:04:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59059 00:20:51.970 11:04:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59099 00:20:51.970 11:04:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:51.970 11:04:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59099 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59099 ']' 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59099 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59099 00:20:57.237 killing process with pid 59099 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59099' 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59099 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59099 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:20:57.237 00:20:57.237 real 0m7.021s 00:20:57.237 user 0m6.849s 00:20:57.237 sys 0m0.645s 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:20:57.237 ************************************ 00:20:57.237 END TEST skip_rpc_with_json 00:20:57.237 ************************************ 00:20:57.237 11:04:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:20:57.237 11:04:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.237 11:04:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.237 11:04:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.237 ************************************ 00:20:57.237 START TEST skip_rpc_with_delay 00:20:57.237 ************************************ 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:20:57.237 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:20:57.495 [2024-12-05 11:04:21.926048] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
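The *ERROR* above is the expected outcome: spdk_tgt refuses --wait-for-rpc when --no-rpc-server means no RPC server will ever come up to release the wait, and the es=1 bookkeeping traced next asserts exactly that failure. A minimal standalone reproduction, using the same binary and flags as this run:

  # Expected to exit non-zero with the app.c:842 error captured above;
  # skip_rpc_with_delay passes only if this command fails.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # non-zero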
00:20:57.495 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:20:57.495 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:57.495 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:57.495 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:57.495 00:20:57.495 real 0m0.073s 00:20:57.495 user 0m0.040s 00:20:57.495 sys 0m0.032s 00:20:57.495 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.495 ************************************ 00:20:57.495 END TEST skip_rpc_with_delay 00:20:57.495 ************************************ 00:20:57.495 11:04:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:20:57.495 11:04:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:20:57.495 11:04:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:20:57.495 11:04:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:20:57.495 11:04:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:57.495 11:04:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.495 11:04:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:57.495 ************************************ 00:20:57.495 START TEST exit_on_failed_rpc_init 00:20:57.495 ************************************ 00:20:57.495 11:04:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:20:57.495 11:04:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59208 00:20:57.495 11:04:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:57.495 11:04:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59208 00:20:57.495 11:04:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59208 ']' 00:20:57.495 11:04:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.495 11:04:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.495 11:04:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.495 11:04:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.495 11:04:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:20:57.495 [2024-12-05 11:04:22.067266] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
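pid 59208 is the first target of exit_on_failed_rpc_init and holds the default RPC socket for the rest of the test. As the traces below play out, the test reduces to this simplified sketch (spdk_tgt stands for the full build/bin path above; NOT and killprocess are the harness helpers seen in the xtrace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # pid 59208, owns /var/tmp/spdk.sock
  spdk_pid=$!
  NOT spdk_tgt -m 0x2        # second instance must fail: socket already in use
  killprocess "$spdk_pid"    # clean exit of the survivor concludes the test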
00:20:57.495 [2024-12-05 11:04:22.067393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59208 ] 00:20:57.753 [2024-12-05 11:04:22.213610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.753 [2024-12-05 11:04:22.265997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:20:58.685 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:20:58.685 [2024-12-05 11:04:23.127047] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:20:58.685 [2024-12-05 11:04:23.127179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59238 ] 00:20:58.685 [2024-12-05 11:04:23.284759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.944 [2024-12-05 11:04:23.348906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.944 [2024-12-05 11:04:23.349003] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:58.944 [2024-12-05 11:04:23.349021] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:58.944 [2024-12-05 11:04:23.349034] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59208 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59208 ']' 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59208 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59208 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.944 killing process with pid 59208 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59208' 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59208 00:20:58.944 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59208 00:20:59.203 00:20:59.203 real 0m1.789s 00:20:59.203 user 0m2.092s 00:20:59.203 sys 0m0.429s 00:20:59.203 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.203 ************************************ 00:20:59.203 END TEST exit_on_failed_rpc_init 00:20:59.203 ************************************ 00:20:59.203 11:04:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:20:59.203 11:04:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:59.203 00:20:59.203 real 0m14.745s 00:20:59.203 user 0m14.235s 00:20:59.203 sys 0m1.654s 00:20:59.203 11:04:23 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.203 ************************************ 00:20:59.203 END TEST skip_rpc 00:20:59.203 ************************************ 00:20:59.203 11:04:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.461 11:04:23 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:20:59.461 11:04:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:59.461 11:04:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.461 11:04:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.461 
************************************ 00:20:59.461 START TEST rpc_client 00:20:59.461 ************************************ 00:20:59.461 11:04:23 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:20:59.461 * Looking for test storage... 00:20:59.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:20:59.461 11:04:23 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@345 -- # : 1 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@353 -- # local d=1 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@355 -- # echo 1 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.461 11:04:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.461 --rc genhtml_branch_coverage=1 00:20:59.461 --rc genhtml_function_coverage=1 00:20:59.461 --rc genhtml_legend=1 00:20:59.461 --rc geninfo_all_blocks=1 00:20:59.461 --rc geninfo_unexecuted_blocks=1 00:20:59.461 00:20:59.461 ' 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.461 --rc genhtml_branch_coverage=1 00:20:59.461 --rc genhtml_function_coverage=1 00:20:59.461 --rc genhtml_legend=1 00:20:59.461 --rc geninfo_all_blocks=1 00:20:59.461 --rc geninfo_unexecuted_blocks=1 00:20:59.461 00:20:59.461 ' 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.461 --rc genhtml_branch_coverage=1 00:20:59.461 --rc genhtml_function_coverage=1 00:20:59.461 --rc genhtml_legend=1 00:20:59.461 --rc geninfo_all_blocks=1 00:20:59.461 --rc geninfo_unexecuted_blocks=1 00:20:59.461 00:20:59.461 ' 00:20:59.461 11:04:24 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.461 --rc genhtml_branch_coverage=1 00:20:59.461 --rc genhtml_function_coverage=1 00:20:59.461 --rc genhtml_legend=1 00:20:59.462 --rc geninfo_all_blocks=1 00:20:59.462 --rc geninfo_unexecuted_blocks=1 00:20:59.462 00:20:59.462 ' 00:20:59.462 11:04:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:20:59.720 OK 00:20:59.720 11:04:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:20:59.720 00:20:59.720 real 0m0.226s 00:20:59.720 user 0m0.137s 00:20:59.720 sys 0m0.106s 00:20:59.720 11:04:24 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.720 ************************************ 00:20:59.720 11:04:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:20:59.720 END TEST rpc_client 00:20:59.720 ************************************ 00:20:59.720 11:04:24 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:20:59.720 11:04:24 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:59.720 11:04:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.720 11:04:24 -- common/autotest_common.sh@10 -- # set +x 00:20:59.720 ************************************ 00:20:59.720 START TEST json_config 00:20:59.720 ************************************ 00:20:59.720 11:04:24 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:20:59.720 11:04:24 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:59.720 11:04:24 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:59.720 11:04:24 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:20:59.720 11:04:24 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:59.720 11:04:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.720 11:04:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.720 11:04:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.720 11:04:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.721 11:04:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.721 11:04:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.721 11:04:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.721 11:04:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.721 11:04:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.721 11:04:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.721 11:04:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.721 11:04:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:20:59.721 11:04:24 json_config -- scripts/common.sh@345 -- # : 1 00:20:59.721 11:04:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.721 11:04:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.721 11:04:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:20:59.721 11:04:24 json_config -- scripts/common.sh@353 -- # local d=1 00:20:59.721 11:04:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.721 11:04:24 json_config -- scripts/common.sh@355 -- # echo 1 00:20:59.721 11:04:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.721 11:04:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:20:59.721 11:04:24 json_config -- scripts/common.sh@353 -- # local d=2 00:20:59.721 11:04:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.721 11:04:24 json_config -- scripts/common.sh@355 -- # echo 2 00:20:59.721 11:04:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.721 11:04:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.721 11:04:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.721 11:04:24 json_config -- scripts/common.sh@368 -- # return 0 00:20:59.721 11:04:24 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.721 11:04:24 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:59.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.721 --rc genhtml_branch_coverage=1 00:20:59.721 --rc genhtml_function_coverage=1 00:20:59.721 --rc genhtml_legend=1 00:20:59.721 --rc geninfo_all_blocks=1 00:20:59.721 --rc geninfo_unexecuted_blocks=1 00:20:59.721 00:20:59.721 ' 00:20:59.721 11:04:24 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:59.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.721 --rc genhtml_branch_coverage=1 00:20:59.721 --rc genhtml_function_coverage=1 00:20:59.721 --rc genhtml_legend=1 00:20:59.721 --rc geninfo_all_blocks=1 00:20:59.721 --rc geninfo_unexecuted_blocks=1 00:20:59.721 00:20:59.721 ' 00:20:59.721 11:04:24 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:59.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.721 --rc genhtml_branch_coverage=1 00:20:59.721 --rc genhtml_function_coverage=1 00:20:59.721 --rc genhtml_legend=1 00:20:59.721 --rc geninfo_all_blocks=1 00:20:59.721 --rc geninfo_unexecuted_blocks=1 00:20:59.721 00:20:59.721 ' 00:20:59.721 11:04:24 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:59.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.721 --rc genhtml_branch_coverage=1 00:20:59.721 --rc genhtml_function_coverage=1 00:20:59.721 --rc genhtml_legend=1 00:20:59.721 --rc geninfo_all_blocks=1 00:20:59.721 --rc geninfo_unexecuted_blocks=1 00:20:59.721 00:20:59.721 ' 00:20:59.721 11:04:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:59.721 11:04:24 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.721 
11:04:24 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:59.980 11:04:24 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:20:59.980 11:04:24 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:20:59.980 11:04:24 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.980 11:04:24 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:59.980 11:04:24 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:20:59.980 11:04:24 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.980 11:04:24 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:59.980 11:04:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.980 11:04:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.980 11:04:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.980 11:04:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.980 11:04:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.980 11:04:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.980 11:04:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.980 11:04:24 json_config -- paths/export.sh@5 -- # export PATH 00:20:59.981 11:04:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:59.981 11:04:24 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:59.981 11:04:24 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:59.981 11:04:24 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@50 -- # : 0 00:20:59.981 
11:04:24 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:59.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:59.981 11:04:24 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:20:59.981 INFO: JSON configuration test init 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:59.981 11:04:24 json_config -- json_config/json_config.sh@272 -- # 
json_config_test_start_app target --wait-for-rpc 00:20:59.981 11:04:24 json_config -- json_config/common.sh@9 -- # local app=target 00:20:59.981 11:04:24 json_config -- json_config/common.sh@10 -- # shift 00:20:59.981 11:04:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:20:59.981 11:04:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:20:59.981 11:04:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:20:59.981 11:04:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:59.981 11:04:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:20:59.981 11:04:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59372 00:20:59.981 Waiting for target to run... 00:20:59.981 11:04:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:20:59.981 11:04:24 json_config -- json_config/common.sh@25 -- # waitforlisten 59372 /var/tmp/spdk_tgt.sock 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 59372 ']' 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.981 11:04:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:20:59.981 11:04:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:20:59.981 [2024-12-05 11:04:24.472105] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
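Once pid 59372 is listening on /var/tmp/spdk_tgt.sock, every tgt_rpc call traced below expands to the same rpc.py invocation against that socket; consistent with the json_config/common.sh@57 traces, the wrapper amounts to this sketch:

  tgt_rpc() {
      # all json_config RPCs go to the dedicated target socket
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
  }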
00:20:59.981 [2024-12-05 11:04:24.472226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:21:00.239 [2024-12-05 11:04:24.858752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.496 [2024-12-05 11:04:24.910985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.754 11:04:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.754 11:04:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:21:00.754 00:21:00.754 11:04:25 json_config -- json_config/common.sh@26 -- # echo '' 00:21:00.754 11:04:25 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:21:00.754 11:04:25 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:21:00.754 11:04:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.754 11:04:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:00.754 11:04:25 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:21:00.754 11:04:25 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:21:00.754 11:04:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.754 11:04:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 11:04:25 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:21:01.012 11:04:25 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:21:01.012 11:04:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:21:01.270 11:04:25 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:21:01.270 11:04:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:21:01.270 11:04:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.270 11:04:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:01.527 11:04:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:21:01.527 11:04:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:21:01.527 11:04:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:21:01.527 11:04:25 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:21:01.527 11:04:25 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:21:01.527 11:04:25 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:21:01.527 11:04:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:21:01.527 11:04:25 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@51 -- # local get_types 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister 
fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@54 -- # sort 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:21:01.786 11:04:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.786 11:04:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@62 -- # return 0 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:21:01.786 11:04:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.786 11:04:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:21:01.786 11:04:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:21:01.786 11:04:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:21:02.044 MallocForNvmf0 00:21:02.044 11:04:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:21:02.044 11:04:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:21:02.303 MallocForNvmf1 00:21:02.303 11:04:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:21:02.303 11:04:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:21:02.562 [2024-12-05 11:04:27.058230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.562 11:04:27 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.562 11:04:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.820 11:04:27 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:21:02.820 11:04:27 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:21:03.078 11:04:27 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:21:03.078 11:04:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:21:03.336 11:04:27 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:21:03.336 11:04:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:21:03.336 [2024-12-05 11:04:27.966726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:03.336 11:04:27 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:21:03.595 11:04:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.595 11:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:03.595 11:04:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:21:03.595 11:04:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.595 11:04:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:03.595 11:04:28 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:21:03.595 11:04:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:21:03.595 11:04:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:21:03.853 MallocBdevForConfigChangeCheck 00:21:03.854 11:04:28 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:21:03.854 11:04:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.854 11:04:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:03.854 11:04:28 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:21:03.854 11:04:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:04.421 INFO: shutting down applications... 00:21:04.421 11:04:28 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
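Stripped of the xtrace noise, the nvmf state assembled above comes down to these calls, taken verbatim from the traces (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock):

  rpc.py bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc.py bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc.py nvmf_create_transport -t tcp -u 8192 -c 0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420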
00:21:04.421 11:04:28 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:21:04.421 11:04:28 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:21:04.421 11:04:28 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:21:04.421 11:04:28 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:21:04.679 Calling clear_iscsi_subsystem 00:21:04.679 Calling clear_nvmf_subsystem 00:21:04.679 Calling clear_nbd_subsystem 00:21:04.679 Calling clear_ublk_subsystem 00:21:04.679 Calling clear_vhost_blk_subsystem 00:21:04.679 Calling clear_vhost_scsi_subsystem 00:21:04.679 Calling clear_bdev_subsystem 00:21:04.679 11:04:29 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:21:04.679 11:04:29 json_config -- json_config/json_config.sh@350 -- # count=100 00:21:04.679 11:04:29 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:21:04.679 11:04:29 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:04.679 11:04:29 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:21:04.679 11:04:29 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:21:05.025 11:04:29 json_config -- json_config/json_config.sh@352 -- # break 00:21:05.025 11:04:29 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:21:05.025 11:04:29 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:21:05.025 11:04:29 json_config -- json_config/common.sh@31 -- # local app=target 00:21:05.025 11:04:29 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:21:05.025 11:04:29 json_config -- json_config/common.sh@35 -- # [[ -n 59372 ]] 00:21:05.025 11:04:29 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59372 00:21:05.025 11:04:29 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:21:05.025 11:04:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:05.025 11:04:29 json_config -- json_config/common.sh@41 -- # kill -0 59372 00:21:05.025 11:04:29 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:21:05.592 11:04:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:21:05.592 11:04:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:05.592 11:04:30 json_config -- json_config/common.sh@41 -- # kill -0 59372 00:21:05.592 11:04:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:21:05.592 11:04:30 json_config -- json_config/common.sh@43 -- # break 00:21:05.592 11:04:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:21:05.592 SPDK target shutdown done 00:21:05.592 11:04:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:21:05.592 INFO: relaunching applications... 00:21:05.592 11:04:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
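The SIGINT-and-poll teardown traced above (kill -SIGINT, then up to 30 probes of kill -0 with a 0.5 s sleep between them) reduces to this loop, simplified from the json_config/common.sh trace:

  kill -SIGINT "${app_pid[$app]}"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "${app_pid[$app]}" 2>/dev/null || break   # target gone, shutdown done
      sleep 0.5
  done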
00:21:05.592 11:04:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:05.592 11:04:30 json_config -- json_config/common.sh@9 -- # local app=target 00:21:05.592 11:04:30 json_config -- json_config/common.sh@10 -- # shift 00:21:05.592 11:04:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:21:05.592 11:04:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:21:05.592 11:04:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:21:05.592 11:04:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:05.592 11:04:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:05.592 11:04:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59652 00:21:05.592 11:04:30 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:05.592 Waiting for target to run... 00:21:05.592 11:04:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:21:05.592 11:04:30 json_config -- json_config/common.sh@25 -- # waitforlisten 59652 /var/tmp/spdk_tgt.sock 00:21:05.592 11:04:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 59652 ']' 00:21:05.592 11:04:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:21:05.592 11:04:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:21:05.592 11:04:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:21:05.592 11:04:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.592 11:04:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:05.592 [2024-12-05 11:04:30.110289] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:05.592 [2024-12-05 11:04:30.110409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:21:05.851 [2024-12-05 11:04:30.499615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.109 [2024-12-05 11:04:30.542902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.368 [2024-12-05 11:04:30.882639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.368 [2024-12-05 11:04:30.914746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:06.626 00:21:06.626 INFO: Checking if target configuration is the same... 00:21:06.626 11:04:31 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.626 11:04:31 json_config -- common/autotest_common.sh@868 -- # return 0 00:21:06.626 11:04:31 json_config -- json_config/common.sh@26 -- # echo '' 00:21:06.626 11:04:31 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:21:06.626 11:04:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
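This check, and the deliberate-change check that follows it, both run test/json_config/json_diff.sh: each side is normalized with config_filter.py -method sort into a temp file, and the two files are diffed. A rough sketch of that flow (filter invocation and message strings taken from the trace; argument handling and the mismatch dump trimmed):

```bash
# Sketch of the json_diff.sh comparison: normalize both sides, then diff.
rootdir=/home/vagrant/spdk_repo/spdk
sort_cfg() { "$rootdir/test/json_config/config_filter.py" -method sort; }

tmp_live=$(mktemp /tmp/62.XXX)
tmp_file=$(mktemp /tmp/spdk_tgt_config.json.XXX)

# Live config from the running target vs. the config file on disk.
"$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > "$tmp_live"
sort_cfg < "$rootdir/spdk_tgt_config.json" > "$tmp_file"

if diff -u "$tmp_live" "$tmp_file"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm "$tmp_live" "$tmp_file"
```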
00:21:06.626 11:04:31 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:06.626 11:04:31 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:21:06.626 11:04:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:06.626 + '[' 2 -ne 2 ']' 00:21:06.626 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:21:06.626 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:21:06.626 + rootdir=/home/vagrant/spdk_repo/spdk 00:21:06.626 +++ basename /dev/fd/62 00:21:06.626 ++ mktemp /tmp/62.XXX 00:21:06.626 + tmp_file_1=/tmp/62.xBY 00:21:06.626 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:06.626 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:21:06.626 + tmp_file_2=/tmp/spdk_tgt_config.json.A2E 00:21:06.626 + ret=0 00:21:06.626 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:06.884 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:07.142 + diff -u /tmp/62.xBY /tmp/spdk_tgt_config.json.A2E 00:21:07.142 INFO: JSON config files are the same 00:21:07.142 + echo 'INFO: JSON config files are the same' 00:21:07.142 + rm /tmp/62.xBY /tmp/spdk_tgt_config.json.A2E 00:21:07.142 + exit 0 00:21:07.142 INFO: changing configuration and checking if this can be detected... 00:21:07.142 11:04:31 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:21:07.142 11:04:31 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:21:07.142 11:04:31 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:21:07.142 11:04:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:21:07.400 11:04:31 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:07.400 11:04:31 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:21:07.400 11:04:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:21:07.400 + '[' 2 -ne 2 ']' 00:21:07.400 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:21:07.400 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:21:07.400 + rootdir=/home/vagrant/spdk_repo/spdk 00:21:07.400 +++ basename /dev/fd/62 00:21:07.400 ++ mktemp /tmp/62.XXX 00:21:07.400 + tmp_file_1=/tmp/62.yjh 00:21:07.400 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:07.400 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:21:07.400 + tmp_file_2=/tmp/spdk_tgt_config.json.OWd 00:21:07.400 + ret=0 00:21:07.400 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:07.658 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:21:07.917 + diff -u /tmp/62.yjh /tmp/spdk_tgt_config.json.OWd 00:21:07.917 + ret=1 00:21:07.917 + echo '=== Start of file: /tmp/62.yjh ===' 00:21:07.917 + cat /tmp/62.yjh 00:21:07.917 + echo '=== End of file: /tmp/62.yjh ===' 00:21:07.917 + echo '' 00:21:07.917 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OWd ===' 00:21:07.917 + cat /tmp/spdk_tgt_config.json.OWd 00:21:07.917 + echo '=== End of file: /tmp/spdk_tgt_config.json.OWd ===' 00:21:07.917 + echo '' 00:21:07.917 + rm /tmp/62.yjh /tmp/spdk_tgt_config.json.OWd 00:21:07.917 + exit 1 00:21:07.917 INFO: configuration change detected. 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@324 -- # [[ -n 59652 ]] 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@200 -- # uname -s 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:07.917 11:04:32 json_config -- json_config/json_config.sh@330 -- # killprocess 59652 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@954 -- # '[' -z 59652 ']' 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@958 -- # kill -0 59652 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@959 -- # uname 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59652 00:21:07.917 
killing process with pid 59652 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59652' 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@973 -- # kill 59652 00:21:07.917 11:04:32 json_config -- common/autotest_common.sh@978 -- # wait 59652 00:21:08.176 11:04:32 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:21:08.176 11:04:32 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:21:08.176 11:04:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.176 11:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:08.176 11:04:32 json_config -- json_config/json_config.sh@335 -- # return 0 00:21:08.176 INFO: Success 00:21:08.176 11:04:32 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:21:08.176 ************************************ 00:21:08.176 END TEST json_config 00:21:08.176 ************************************ 00:21:08.176 00:21:08.176 real 0m8.556s 00:21:08.176 user 0m12.079s 00:21:08.176 sys 0m2.023s 00:21:08.176 11:04:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.176 11:04:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:08.176 11:04:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:21:08.176 11:04:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:08.176 11:04:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.176 11:04:32 -- common/autotest_common.sh@10 -- # set +x 00:21:08.176 ************************************ 00:21:08.176 START TEST json_config_extra_key 00:21:08.176 ************************************ 00:21:08.176 11:04:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:21:08.435 11:04:32 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:08.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.435 --rc genhtml_branch_coverage=1 00:21:08.435 --rc genhtml_function_coverage=1 00:21:08.435 --rc genhtml_legend=1 00:21:08.435 --rc geninfo_all_blocks=1 00:21:08.435 --rc geninfo_unexecuted_blocks=1 00:21:08.435 00:21:08.435 ' 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:08.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.435 --rc genhtml_branch_coverage=1 00:21:08.435 --rc genhtml_function_coverage=1 00:21:08.435 --rc genhtml_legend=1 00:21:08.435 --rc geninfo_all_blocks=1 00:21:08.435 --rc geninfo_unexecuted_blocks=1 00:21:08.435 00:21:08.435 ' 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:08.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.435 --rc genhtml_branch_coverage=1 00:21:08.435 --rc genhtml_function_coverage=1 00:21:08.435 --rc genhtml_legend=1 00:21:08.435 --rc geninfo_all_blocks=1 00:21:08.435 --rc geninfo_unexecuted_blocks=1 00:21:08.435 00:21:08.435 ' 00:21:08.435 11:04:32 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:08.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:08.435 --rc genhtml_branch_coverage=1 00:21:08.435 --rc genhtml_function_coverage=1 00:21:08.435 --rc genhtml_legend=1 00:21:08.435 --rc geninfo_all_blocks=1 00:21:08.435 --rc geninfo_unexecuted_blocks=1 00:21:08.435 00:21:08.435 ' 00:21:08.435 11:04:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.435 11:04:32 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.435 11:04:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:21:08.435 11:04:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.435 11:04:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.435 11:04:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.436 11:04:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.436 11:04:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.436 11:04:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.436 11:04:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:21:08.436 11:04:33 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:08.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:08.436 11:04:33 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:21:08.436 INFO: launching applications... 00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
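The declare -A lines a few entries up show the bookkeeping scheme used by json_config/common.sh: every per-app fact (pid, socket, EAL params, config path) lives in an associative array keyed by app name, and an ERR trap reports the failure site of any command that slips through. A minimal sketch of that pattern (the handler body here is hypothetical; the real on_error_exit is not shown in this log):

```bash
# Per-app state keyed by app name, as in test/json_config/common.sh.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

# Hypothetical stand-in for on_error_exit: report where it failed, clean up.
on_error_exit() {
    echo "error in ${1:-main} at line ${2:-?}" >&2
    [[ -n ${app_pid[target]} ]] && kill -SIGINT "${app_pid[target]}"
    exit 1
}
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
```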
00:21:08.436 11:04:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59838 00:21:08.436 Waiting for target to run... 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59838 /var/tmp/spdk_tgt.sock 00:21:08.436 11:04:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59838 ']' 00:21:08.436 11:04:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:21:08.436 11:04:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:21:08.436 11:04:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:21:08.436 11:04:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.436 11:04:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:21:08.436 11:04:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:21:08.436 [2024-12-05 11:04:33.085563] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:08.436 [2024-12-05 11:04:33.085695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59838 ] 00:21:09.002 [2024-12-05 11:04:33.472990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.002 [2024-12-05 11:04:33.517975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.571 11:04:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.571 11:04:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:21:09.571 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:21:09.571 INFO: shutting down applications... 00:21:09.571 11:04:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
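Stripped of the trace noise, the launch above reduces to three moves: start spdk_tgt in the background with the JSON config, record its PID, and poll the RPC socket until it answers. A simplified equivalent (binary path, flags, and socket copied from the trace; the polling loop is a stand-in for the real waitforlisten helper in autotest_common.sh):

```bash
rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock

"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
    --json "$rootdir/test/json_config/extra_key.json" &
app_pid=$!

# Stand-in for waitforlisten: retry a cheap RPC until the socket answers.
for ((i = 0; i < 100; i++)); do
    if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
        echo "target listening on $sock (pid $app_pid)"
        break
    fi
    sleep 0.1
done
```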
00:21:09.571 11:04:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59838 ]] 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59838 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59838 00:21:09.571 11:04:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:10.139 11:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:10.139 11:04:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:10.139 11:04:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59838 00:21:10.139 11:04:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:21:10.139 11:04:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:21:10.139 11:04:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:21:10.139 SPDK target shutdown done 00:21:10.139 11:04:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:21:10.139 Success 00:21:10.139 11:04:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:21:10.139 00:21:10.139 real 0m1.739s 00:21:10.139 user 0m1.535s 00:21:10.139 sys 0m0.469s 00:21:10.139 11:04:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.139 ************************************ 00:21:10.139 END TEST json_config_extra_key 00:21:10.139 11:04:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:21:10.139 ************************************ 00:21:10.139 11:04:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:21:10.139 11:04:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:10.139 11:04:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.139 11:04:34 -- common/autotest_common.sh@10 -- # set +x 00:21:10.139 ************************************ 00:21:10.139 START TEST alias_rpc 00:21:10.139 ************************************ 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:21:10.140 * Looking for test storage... 
00:21:10.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.140 11:04:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.140 --rc genhtml_branch_coverage=1 00:21:10.140 --rc genhtml_function_coverage=1 00:21:10.140 --rc genhtml_legend=1 00:21:10.140 --rc geninfo_all_blocks=1 00:21:10.140 --rc geninfo_unexecuted_blocks=1 00:21:10.140 00:21:10.140 ' 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.140 --rc genhtml_branch_coverage=1 00:21:10.140 --rc genhtml_function_coverage=1 00:21:10.140 --rc genhtml_legend=1 00:21:10.140 --rc geninfo_all_blocks=1 00:21:10.140 --rc geninfo_unexecuted_blocks=1 00:21:10.140 00:21:10.140 ' 00:21:10.140 11:04:34 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.140 --rc genhtml_branch_coverage=1 00:21:10.140 --rc genhtml_function_coverage=1 00:21:10.140 --rc genhtml_legend=1 00:21:10.140 --rc geninfo_all_blocks=1 00:21:10.140 --rc geninfo_unexecuted_blocks=1 00:21:10.140 00:21:10.140 ' 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.140 --rc genhtml_branch_coverage=1 00:21:10.140 --rc genhtml_function_coverage=1 00:21:10.140 --rc genhtml_legend=1 00:21:10.140 --rc geninfo_all_blocks=1 00:21:10.140 --rc geninfo_unexecuted_blocks=1 00:21:10.140 00:21:10.140 ' 00:21:10.140 11:04:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:10.140 11:04:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59928 00:21:10.140 11:04:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59928 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59928 ']' 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.140 11:04:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.140 11:04:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:10.399 [2024-12-05 11:04:34.836919] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
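The lcov preamble that repeats before every test binary hinges on the version comparison in scripts/common.sh: `lt 1.15 2` splits both version strings on `.`, `-`, and `:` and compares them field by field to decide whether the installed lcov predates 2.x. A condensed sketch of that logic (simplified to the `<` case traced here; the real cmp_versions dispatches on other operators as well):

```bash
# Field-by-field version compare, after scripts/common.sh cmp_versions.
lt() {  # succeeds when version $1 sorts strictly before version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields count as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov is pre-2.x"   # the exact call seen in the trace
```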
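The body of the alias test that now runs is essentially a single call, `rpc.py load_config -i`, which replays a JSON configuration against the running target; judging by the test's name, `-i` is the include-aliases switch that permits deprecated method names in that config. The config itself is not shown in this log, so the payload below is a generic, illustrative example only:

```bash
# Replay a JSON configuration against a running target; with no file
# argument the config is read from stdin. Payload is illustrative only.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
```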
00:21:10.399 [2024-12-05 11:04:34.837342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59928 ] 00:21:10.399 [2024-12-05 11:04:34.988243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.657 [2024-12-05 11:04:35.053297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.223 11:04:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.223 11:04:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:11.223 11:04:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:21:11.482 11:04:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59928 00:21:11.482 11:04:36 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59928 ']' 00:21:11.482 11:04:36 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59928 00:21:11.482 11:04:36 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:21:11.482 11:04:36 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.482 11:04:36 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59928 00:21:11.740 killing process with pid 59928 00:21:11.740 11:04:36 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.740 11:04:36 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.740 11:04:36 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59928' 00:21:11.740 11:04:36 alias_rpc -- common/autotest_common.sh@973 -- # kill 59928 00:21:11.740 11:04:36 alias_rpc -- common/autotest_common.sh@978 -- # wait 59928 00:21:11.999 ************************************ 00:21:11.999 END TEST alias_rpc 00:21:11.999 ************************************ 00:21:11.999 00:21:11.999 real 0m1.857s 00:21:11.999 user 0m2.085s 00:21:11.999 sys 0m0.490s 00:21:11.999 11:04:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.999 11:04:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 11:04:36 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:21:11.999 11:04:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:11.999 11:04:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:11.999 11:04:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.999 11:04:36 -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 ************************************ 00:21:11.999 START TEST dpdk_mem_utility 00:21:11.999 ************************************ 00:21:11.999 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:11.999 * Looking for test storage... 
00:21:11.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:21:11.999 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:11.999 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:21:11.999 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:12.258 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:12.258 11:04:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.258 11:04:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.258 11:04:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.258 11:04:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.258 11:04:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:21:12.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.259 11:04:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:12.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.259 --rc genhtml_branch_coverage=1 00:21:12.259 --rc genhtml_function_coverage=1 00:21:12.259 --rc genhtml_legend=1 00:21:12.259 --rc geninfo_all_blocks=1 00:21:12.259 --rc geninfo_unexecuted_blocks=1 00:21:12.259 00:21:12.259 ' 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:12.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.259 --rc genhtml_branch_coverage=1 00:21:12.259 --rc genhtml_function_coverage=1 00:21:12.259 --rc genhtml_legend=1 00:21:12.259 --rc geninfo_all_blocks=1 00:21:12.259 --rc geninfo_unexecuted_blocks=1 00:21:12.259 00:21:12.259 ' 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:12.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.259 --rc genhtml_branch_coverage=1 00:21:12.259 --rc genhtml_function_coverage=1 00:21:12.259 --rc genhtml_legend=1 00:21:12.259 --rc geninfo_all_blocks=1 00:21:12.259 --rc geninfo_unexecuted_blocks=1 00:21:12.259 00:21:12.259 ' 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:12.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.259 --rc genhtml_branch_coverage=1 00:21:12.259 --rc genhtml_function_coverage=1 00:21:12.259 --rc genhtml_legend=1 00:21:12.259 --rc geninfo_all_blocks=1 00:21:12.259 --rc geninfo_unexecuted_blocks=1 00:21:12.259 00:21:12.259 ' 00:21:12.259 11:04:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:12.259 11:04:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60022 00:21:12.259 11:04:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60022 00:21:12.259 11:04:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60022 ']' 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.259 11:04:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:12.259 [2024-12-05 11:04:36.797716] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
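Once waitforlisten returns, the memory-utility test below is a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to a dump file (the reply shown below names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that dump, first as a whole and then per heap via -m 0. As a standalone sketch, with both invocations copied from the trace:

```bash
rootdir=/home/vagrant/spdk_repo/spdk

# 1) Ask the running target to dump its DPDK memory statistics.
#    (Default RPC socket /var/tmp/spdk.sock, as this test uses.)
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats

# 2) Summarize the dump: heap/mempool/memzone totals first,
#    then the element-level view of heap id 0.
"$rootdir/scripts/dpdk_mem_info.py"
"$rootdir/scripts/dpdk_mem_info.py" -m 0
```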
00:21:12.259 [2024-12-05 11:04:36.798106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60022 ] 00:21:12.518 [2024-12-05 11:04:36.949509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.518 [2024-12-05 11:04:37.014152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.452 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.452 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:21:13.452 11:04:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:21:13.452 11:04:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:21:13.452 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.452 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:13.452 { 00:21:13.452 "filename": "/tmp/spdk_mem_dump.txt" 00:21:13.452 } 00:21:13.452 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.452 11:04:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:13.452 DPDK memory size 818.000000 MiB in 1 heap(s) 00:21:13.452 1 heaps totaling size 818.000000 MiB 00:21:13.452 size: 818.000000 MiB heap id: 0 00:21:13.452 end heaps---------- 00:21:13.452 9 mempools totaling size 603.782043 MiB 00:21:13.452 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:21:13.452 size: 158.602051 MiB name: PDU_data_out_Pool 00:21:13.452 size: 100.555481 MiB name: bdev_io_60022 00:21:13.452 size: 50.003479 MiB name: msgpool_60022 00:21:13.452 size: 36.509338 MiB name: fsdev_io_60022 00:21:13.452 size: 21.763794 MiB name: PDU_Pool 00:21:13.452 size: 19.513306 MiB name: SCSI_TASK_Pool 00:21:13.452 size: 4.133484 MiB name: evtpool_60022 00:21:13.452 size: 0.026123 MiB name: Session_Pool 00:21:13.452 end mempools------- 00:21:13.452 6 memzones totaling size 4.142822 MiB 00:21:13.452 size: 1.000366 MiB name: RG_ring_0_60022 00:21:13.452 size: 1.000366 MiB name: RG_ring_1_60022 00:21:13.452 size: 1.000366 MiB name: RG_ring_4_60022 00:21:13.452 size: 1.000366 MiB name: RG_ring_5_60022 00:21:13.452 size: 0.125366 MiB name: RG_ring_2_60022 00:21:13.452 size: 0.015991 MiB name: RG_ring_3_60022 00:21:13.452 end memzones------- 00:21:13.452 11:04:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:21:13.452 heap id: 0 total size: 818.000000 MiB number of busy elements: 234 number of free elements: 15 00:21:13.452 list of free elements. 
size: 10.817688 MiB 00:21:13.452 element at address: 0x200019200000 with size: 0.999878 MiB 00:21:13.452 element at address: 0x200019400000 with size: 0.999878 MiB 00:21:13.452 element at address: 0x200000400000 with size: 0.996338 MiB 00:21:13.452 element at address: 0x200032000000 with size: 0.994446 MiB 00:21:13.452 element at address: 0x200006400000 with size: 0.959839 MiB 00:21:13.452 element at address: 0x200012c00000 with size: 0.944275 MiB 00:21:13.452 element at address: 0x200019600000 with size: 0.936584 MiB 00:21:13.452 element at address: 0x200000200000 with size: 0.717346 MiB 00:21:13.452 element at address: 0x20001ae00000 with size: 0.571899 MiB 00:21:13.452 element at address: 0x200000c00000 with size: 0.490662 MiB 00:21:13.452 element at address: 0x20000a600000 with size: 0.489441 MiB 00:21:13.452 element at address: 0x200019800000 with size: 0.485657 MiB 00:21:13.452 element at address: 0x200003e00000 with size: 0.481018 MiB 00:21:13.452 element at address: 0x200028200000 with size: 0.397034 MiB 00:21:13.452 element at address: 0x200000800000 with size: 0.353394 MiB 00:21:13.452 list of standard malloc elements. size: 199.253418 MiB 00:21:13.452 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:21:13.452 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:21:13.452 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:21:13.452 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:21:13.452 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:21:13.452 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:21:13.452 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:21:13.452 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:21:13.452 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:21:13.452 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:21:13.452 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:21:13.452 element at address: 0x20000085a780 with size: 0.000183 MiB 00:21:13.452 element at address: 0x20000085a980 with size: 0.000183 MiB 00:21:13.452 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:21:13.452 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:21:13.452 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:21:13.452 element at address: 0x20000087f080 with size: 0.000183 MiB
00:21:13.452 [... several hundred further free-element entries of 0.000183 MiB each, addresses 0x20000087f140 through 0x20002826ff00, elided ...]
00:21:13.454 list of memzone associated elements. size: 607.928894 MiB
00:21:13.454 [... per-element memzone association list elided; largest entries: MP_PDU_immediate_data_Pool_0 211.416626 MiB, MP_PDU_data_out_Pool_0 157.562439 MiB, MP_bdev_io_60022_0 100.054932 MiB, MP_msgpool_60022_0 48.002930 MiB, MP_fsdev_io_60022_0 36.008789 MiB, MP_PDU_Pool_0 20.255432 MiB, MP_SCSI_TASK_Pool_0 18.004944 MiB; the remainder are mempool, ring and evtpool bookkeeping entries of 3 MiB and below ...]
00:21:13.455 11:04:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:21:13.455 11:04:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60022
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60022 ']'
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60022
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60022
00:21:13.455 killing process with pid 60022
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60022'
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60022
00:21:13.455 11:04:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60022
00:21:13.759 ************************************
00:21:13.759 END TEST dpdk_mem_utility
00:21:13.759 ************************************
00:21:13.759
00:21:13.759 real 0m1.736s
00:21:13.759 user 0m1.856s
00:21:13.759 sys 0m0.455s
00:21:13.759 11:04:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:13.759 11:04:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
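The dump above is DPDK's accounting of the SPDK application's hugepage heap: hundreds of small free elements plus the memzones backing each mempool and ring. For reference, the same snapshot can be requested from any running SPDK app over its RPC socket. A minimal sketch, assuming a target is listening on the default /var/tmp/spdk.sock (env_dpdk_get_mem_stats is a stock SPDK RPC; the socket path here is illustrative):

    # Ask the live app to dump its DPDK memory statistics; the JSON reply
    # names the file the dump was written to.
    scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats

test_dpdk_mem_info.sh drives the same RPC through the harness's rpc_cmd wrapper, which is presumably how the dump above landed in this log.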
11:04:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:21:13.759 11:04:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:13.759 11:04:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.759 11:04:38 -- common/autotest_common.sh@10 -- # set +x 00:21:13.759 ************************************ 00:21:13.759 START TEST event 00:21:13.759 ************************************ 00:21:13.759 11:04:38 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:21:13.759 * Looking for test storage... 00:21:14.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1711 -- # lcov --version 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:14.018 11:04:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.018 11:04:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.018 11:04:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.018 11:04:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.018 11:04:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.018 11:04:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.018 11:04:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.018 11:04:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.018 11:04:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.018 11:04:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.018 11:04:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.018 11:04:38 event -- scripts/common.sh@344 -- # case "$op" in 00:21:14.018 11:04:38 event -- scripts/common.sh@345 -- # : 1 00:21:14.018 11:04:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.018 11:04:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.018 11:04:38 event -- scripts/common.sh@365 -- # decimal 1 00:21:14.018 11:04:38 event -- scripts/common.sh@353 -- # local d=1 00:21:14.018 11:04:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.018 11:04:38 event -- scripts/common.sh@355 -- # echo 1 00:21:14.018 11:04:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.018 11:04:38 event -- scripts/common.sh@366 -- # decimal 2 00:21:14.018 11:04:38 event -- scripts/common.sh@353 -- # local d=2 00:21:14.018 11:04:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.018 11:04:38 event -- scripts/common.sh@355 -- # echo 2 00:21:14.018 11:04:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.018 11:04:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.018 11:04:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.018 11:04:38 event -- scripts/common.sh@368 -- # return 0 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.018 --rc genhtml_branch_coverage=1 00:21:14.018 --rc genhtml_function_coverage=1 00:21:14.018 --rc genhtml_legend=1 00:21:14.018 --rc geninfo_all_blocks=1 00:21:14.018 --rc geninfo_unexecuted_blocks=1 00:21:14.018 00:21:14.018 ' 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.018 --rc genhtml_branch_coverage=1 00:21:14.018 --rc genhtml_function_coverage=1 00:21:14.018 --rc genhtml_legend=1 00:21:14.018 --rc geninfo_all_blocks=1 00:21:14.018 --rc geninfo_unexecuted_blocks=1 00:21:14.018 00:21:14.018 ' 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.018 --rc genhtml_branch_coverage=1 00:21:14.018 --rc genhtml_function_coverage=1 00:21:14.018 --rc genhtml_legend=1 00:21:14.018 --rc geninfo_all_blocks=1 00:21:14.018 --rc geninfo_unexecuted_blocks=1 00:21:14.018 00:21:14.018 ' 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:14.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.018 --rc genhtml_branch_coverage=1 00:21:14.018 --rc genhtml_function_coverage=1 00:21:14.018 --rc genhtml_legend=1 00:21:14.018 --rc geninfo_all_blocks=1 00:21:14.018 --rc geninfo_unexecuted_blocks=1 00:21:14.018 00:21:14.018 ' 00:21:14.018 11:04:38 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:14.018 11:04:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:21:14.018 11:04:38 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:21:14.018 11:04:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.018 11:04:38 event -- common/autotest_common.sh@10 -- # set +x 00:21:14.018 ************************************ 00:21:14.018 START TEST event_perf 00:21:14.018 ************************************ 00:21:14.018 11:04:38 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:21:14.018 Running I/O for 1 seconds...[2024-12-05 
11:04:38.541905] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:14.018 [2024-12-05 11:04:38.542138] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60121 ] 00:21:14.275 [2024-12-05 11:04:38.701655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.275 [2024-12-05 11:04:38.771931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.275 [2024-12-05 11:04:38.772022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.275 [2024-12-05 11:04:38.772150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.275 [2024-12-05 11:04:38.772159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.209 Running I/O for 1 seconds... 00:21:15.209 lcore 0: 182829 00:21:15.209 lcore 1: 182828 00:21:15.209 lcore 2: 182828 00:21:15.209 lcore 3: 182828 00:21:15.209 done. 00:21:15.209 00:21:15.209 real 0m1.300s 00:21:15.209 user 0m4.111s 00:21:15.209 sys 0m0.061s 00:21:15.209 11:04:39 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.209 ************************************ 00:21:15.209 END TEST event_perf 00:21:15.209 ************************************ 00:21:15.209 11:04:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:21:15.466 11:04:39 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:15.466 11:04:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:15.466 11:04:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.466 11:04:39 event -- common/autotest_common.sh@10 -- # set +x 00:21:15.466 ************************************ 00:21:15.466 START TEST event_reactor 00:21:15.466 ************************************ 00:21:15.466 11:04:39 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:15.466 [2024-12-05 11:04:39.893147] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:21:15.466 [2024-12-05 11:04:39.893221] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:21:15.466 [2024-12-05 11:04:40.033256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.466 [2024-12-05 11:04:40.087569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.839 test_start 00:21:16.839 oneshot 00:21:16.839 tick 100 00:21:16.839 tick 100 00:21:16.839 tick 250 00:21:16.839 tick 100 00:21:16.839 tick 100 00:21:16.839 tick 250 00:21:16.839 tick 100 00:21:16.839 tick 500 00:21:16.839 tick 100 00:21:16.839 tick 100 00:21:16.839 tick 250 00:21:16.839 tick 100 00:21:16.839 tick 100 00:21:16.839 test_end 00:21:16.839 00:21:16.839 real 0m1.258s 00:21:16.839 user 0m1.113s 00:21:16.839 sys 0m0.039s 00:21:16.839 11:04:41 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.840 ************************************ 00:21:16.840 END TEST event_reactor 00:21:16.840 11:04:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:21:16.840 ************************************ 00:21:16.840 11:04:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:16.840 11:04:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:16.840 11:04:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.840 11:04:41 event -- common/autotest_common.sh@10 -- # set +x 00:21:16.840 ************************************ 00:21:16.840 START TEST event_reactor_perf 00:21:16.840 ************************************ 00:21:16.840 11:04:41 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:16.840 [2024-12-05 11:04:41.206505] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:21:16.840 [2024-12-05 11:04:41.206760] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:21:16.840 [2024-12-05 11:04:41.356535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.840 [2024-12-05 11:04:41.419578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.236 test_start 00:21:18.236 test_end 00:21:18.236 Performance: 430279 events per second 00:21:18.236 00:21:18.236 real 0m1.277s 00:21:18.236 user 0m1.131s 00:21:18.236 sys 0m0.040s 00:21:18.236 11:04:42 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.236 ************************************ 00:21:18.236 END TEST event_reactor_perf 00:21:18.236 ************************************ 00:21:18.236 11:04:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.236 11:04:42 event -- event/event.sh@49 -- # uname -s 00:21:18.236 11:04:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:21:18.236 11:04:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:18.236 11:04:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:18.236 11:04:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.236 11:04:42 event -- common/autotest_common.sh@10 -- # set +x 00:21:18.236 ************************************ 00:21:18.236 START TEST event_scheduler 00:21:18.236 ************************************ 00:21:18.236 11:04:42 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:18.236 * Looking for test storage... 
00:21:18.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:21:18.236 11:04:42 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:18.236 11:04:42 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:18.236 11:04:42 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:21:18.236 11:04:42 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:18.236 11:04:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.236 11:04:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.237 11:04:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.237 --rc genhtml_branch_coverage=1 00:21:18.237 --rc genhtml_function_coverage=1 00:21:18.237 --rc genhtml_legend=1 00:21:18.237 --rc geninfo_all_blocks=1 00:21:18.237 --rc geninfo_unexecuted_blocks=1 00:21:18.237 00:21:18.237 ' 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.237 --rc genhtml_branch_coverage=1 00:21:18.237 --rc genhtml_function_coverage=1 00:21:18.237 --rc genhtml_legend=1 00:21:18.237 --rc geninfo_all_blocks=1 00:21:18.237 --rc geninfo_unexecuted_blocks=1 00:21:18.237 00:21:18.237 ' 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.237 --rc genhtml_branch_coverage=1 00:21:18.237 --rc genhtml_function_coverage=1 00:21:18.237 --rc genhtml_legend=1 00:21:18.237 --rc geninfo_all_blocks=1 00:21:18.237 --rc geninfo_unexecuted_blocks=1 00:21:18.237 00:21:18.237 ' 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:18.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.237 --rc genhtml_branch_coverage=1 00:21:18.237 --rc genhtml_function_coverage=1 00:21:18.237 --rc genhtml_legend=1 00:21:18.237 --rc geninfo_all_blocks=1 00:21:18.237 --rc geninfo_unexecuted_blocks=1 00:21:18.237 00:21:18.237 ' 00:21:18.237 11:04:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:21:18.237 11:04:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60259 00:21:18.237 11:04:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:21:18.237 11:04:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60259 00:21:18.237 11:04:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:21:18.237 11:04:42 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60259 ']' 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.237 11:04:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:18.237 [2024-12-05 11:04:42.778504] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:18.237 [2024-12-05 11:04:42.779077] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60259 ] 00:21:18.496 [2024-12-05 11:04:42.937157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.496 [2024-12-05 11:04:43.009121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.496 [2024-12-05 11:04:43.009260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.496 [2024-12-05 11:04:43.009261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.496 [2024-12-05 11:04:43.009194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:21:19.430 11:04:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.430 POWER: Cannot set governor of lcore 0 to userspace 00:21:19.430 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.430 POWER: Cannot set governor of lcore 0 to performance 00:21:19.430 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.430 POWER: Cannot set governor of lcore 0 to userspace 00:21:19.430 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:19.430 POWER: Cannot set governor of lcore 0 to userspace 00:21:19.430 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:21:19.430 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:21:19.430 POWER: Unable to set Power Management Environment for lcore 0 00:21:19.430 [2024-12-05 11:04:43.843808] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:21:19.430 [2024-12-05 11:04:43.843828] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:21:19.430 [2024-12-05 11:04:43.843838] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:21:19.430 [2024-12-05 11:04:43.843851] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:21:19.430 [2024-12-05 11:04:43.843858] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:21:19.430 [2024-12-05 11:04:43.843866] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.430 11:04:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 [2024-12-05 11:04:43.921672] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.430 11:04:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.430 11:04:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:19.430 ************************************ 00:21:19.430 START TEST scheduler_create_thread 00:21:19.430 ************************************ 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 2 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 3 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 4 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 5 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 6 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 7 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 8 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 9 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 10 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.431 11:04:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:21.332 11:04:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.332 11:04:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:21:21.332 11:04:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:21:21.332 11:04:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.332 11:04:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:21.934 11:04:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.934 ************************************ 00:21:21.934 END TEST scheduler_create_thread 00:21:21.934 ************************************ 00:21:21.934 00:21:21.934 real 0m2.613s 00:21:21.934 user 0m0.016s 00:21:21.934 sys 0m0.008s 00:21:21.934 11:04:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.934 11:04:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:22.192 11:04:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:22.192 11:04:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60259 00:21:22.192 11:04:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60259 ']' 00:21:22.192 11:04:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60259 00:21:22.192 11:04:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:21:22.192 11:04:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.192 11:04:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60259 00:21:22.193 killing process with pid 60259 00:21:22.193 11:04:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:22.193 11:04:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:22.193 11:04:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60259' 00:21:22.193 11:04:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60259 00:21:22.193 11:04:46 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60259 00:21:22.451 [2024-12-05 11:04:47.025257] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:21:22.710 ************************************ 00:21:22.710 END TEST event_scheduler 00:21:22.710 ************************************ 00:21:22.710 00:21:22.710 real 0m4.686s 00:21:22.710 user 0m9.036s 00:21:22.710 sys 0m0.444s 00:21:22.710 11:04:47 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.710 11:04:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:22.710 11:04:47 event -- event/event.sh@51 -- # modprobe -n nbd 00:21:22.710 11:04:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:21:22.710 11:04:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:22.710 11:04:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.710 11:04:47 event -- common/autotest_common.sh@10 -- # set +x 00:21:22.710 ************************************ 00:21:22.710 START TEST app_repeat 00:21:22.710 ************************************ 00:21:22.710 11:04:47 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:21:22.710 Process app_repeat pid: 60382 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60382 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60382' 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:22.710 spdk_app_start Round 0 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:21:22.710 11:04:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60382 /var/tmp/spdk-nbd.sock 00:21:22.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:22.710 11:04:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60382 ']' 00:21:22.710 11:04:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:22.710 11:04:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.710 11:04:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:22.710 11:04:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.710 11:04:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:22.710 [2024-12-05 11:04:47.312497] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:21:22.710 [2024-12-05 11:04:47.312832] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60382 ] 00:21:22.968 [2024-12-05 11:04:47.466009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:22.968 [2024-12-05 11:04:47.534433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.968 [2024-12-05 11:04:47.534437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.904 11:04:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.904 11:04:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:23.904 11:04:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:24.163 Malloc0 00:21:24.163 11:04:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:24.422 Malloc1 00:21:24.422 11:04:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:24.422 11:04:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:24.423 11:04:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:24.423 11:04:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:24.423 11:04:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:24.423 11:04:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:24.423 11:04:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:24.682 /dev/nbd0 00:21:24.682 11:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:24.682 11:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:24.682 11:04:49 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:24.682 1+0 records in 00:21:24.682 1+0 records out 00:21:24.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259074 s, 15.8 MB/s 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:24.682 11:04:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:24.682 11:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:24.682 11:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:24.682 11:04:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:24.941 /dev/nbd1 00:21:25.201 11:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:25.201 11:04:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:25.201 1+0 records in 00:21:25.201 1+0 records out 00:21:25.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196125 s, 20.9 MB/s 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:25.201 11:04:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:25.201 11:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.201 11:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.201 11:04:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:25.201 11:04:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
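The dd transfers above are the tail of nbd_common.sh's waitfornbd check: the helper polls /proc/partitions until the kernel exposes the device, then proves the device actually serves I/O with a single 4 KiB O_DIRECT read. A condensed sketch of that pattern (not the verbatim helper; the retry bound mirrors the trace, while the sleep interval and output path are assumptions):

    # Wait for an NBD device to appear, then verify it with one direct read.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed poll interval; not visible in the trace
        done
        ((i <= 20)) || return 1
        # A non-empty 4096-byte direct read mirrors the dd/stat check traced above.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
            [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
    }

In this run both /dev/nbd0 and /dev/nbd1 pass on the first probe, so the loop exits immediately via break.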
00:21:25.201 11:04:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:25.460 { 00:21:25.460 "bdev_name": "Malloc0", 00:21:25.460 "nbd_device": "/dev/nbd0" 00:21:25.460 }, 00:21:25.460 { 00:21:25.460 "bdev_name": "Malloc1", 00:21:25.460 "nbd_device": "/dev/nbd1" 00:21:25.460 } 00:21:25.460 ]' 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:25.460 { 00:21:25.460 "bdev_name": "Malloc0", 00:21:25.460 "nbd_device": "/dev/nbd0" 00:21:25.460 }, 00:21:25.460 { 00:21:25.460 "bdev_name": "Malloc1", 00:21:25.460 "nbd_device": "/dev/nbd1" 00:21:25.460 } 00:21:25.460 ]' 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:25.460 /dev/nbd1' 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:25.460 /dev/nbd1' 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:25.460 11:04:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:25.460 256+0 records in 00:21:25.460 256+0 records out 00:21:25.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00864083 s, 121 MB/s 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:25.460 256+0 records in 00:21:25.460 256+0 records out 00:21:25.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027682 s, 37.9 MB/s 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:25.460 256+0 records in 00:21:25.460 256+0 records out 00:21:25.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232216 s, 45.2 MB/s 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:25.460 11:04:50 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.460 11:04:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:26.082 11:04:50 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:26.082 11:04:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:26.340 11:04:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:26.340 11:04:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:26.907 11:04:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:26.907 [2024-12-05 11:04:51.391614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:26.907 [2024-12-05 11:04:51.444263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.907 [2024-12-05 11:04:51.444267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.907 [2024-12-05 11:04:51.487262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:26.907 [2024-12-05 11:04:51.487314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:30.195 11:04:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:30.195 spdk_app_start Round 1 00:21:30.195 11:04:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:21:30.195 11:04:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60382 /var/tmp/spdk-nbd.sock 00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60382 ']' 00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
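Round 0 has just been torn down (nbd_get_count returned 0, spdk_kill_instance SIGTERM, sleep 3) and Round 1 started. Stitching the event/event.sh line tags (@23-@35) together, the driving loop looks roughly like the sketch below; rpc.py arguments and the socket path are copied from the trace, and app_pid stands in for the pid (60382) the test polls, normally the $! of the launched app.

# Rough shape of the app_repeat round driver, inferred from the
# event/event.sh tags in the trace; helper internals elided.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
app_pid=60382 # pid observed in this run

for i in {0..2}; do
    echo "spdk_app_start Round $i"                   # @24
    waitforlisten "$app_pid" "$sock"                 # @25: app is up and listening
    "$rpc" -s "$sock" bdev_malloc_create 64 4096     # @27 -> Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096     # @28 -> Malloc1
    nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' # @30
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM     # @34: restart for the next round
    sleep 3                                          # @35
done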
00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.195 11:04:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:30.195 11:04:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:30.455 Malloc0 00:21:30.455 11:04:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:30.713 Malloc1 00:21:30.713 11:04:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:30.713 11:04:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:30.714 11:04:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:30.714 11:04:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:30.714 11:04:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:30.714 11:04:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:30.714 11:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:30.714 11:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:30.714 11:04:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:30.714 /dev/nbd0 00:21:30.971 11:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:30.971 11:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:30.971 1+0 records in 00:21:30.971 1+0 records out 
00:21:30.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311486 s, 13.1 MB/s 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:30.971 11:04:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:30.971 11:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:30.971 11:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:30.971 11:04:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:31.229 /dev/nbd1 00:21:31.229 11:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:31.229 11:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:31.229 1+0 records in 00:21:31.229 1+0 records out 00:21:31.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338381 s, 12.1 MB/s 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:31.229 11:04:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:31.229 11:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:31.229 11:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:31.229 11:04:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:31.229 11:04:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:31.229 11:04:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:31.488 11:04:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:31.488 { 00:21:31.488 "bdev_name": "Malloc0", 00:21:31.488 "nbd_device": "/dev/nbd0" 00:21:31.488 }, 00:21:31.488 { 00:21:31.488 "bdev_name": "Malloc1", 00:21:31.488 "nbd_device": "/dev/nbd1" 00:21:31.488 } 
00:21:31.488 ]' 00:21:31.488 11:04:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:31.488 { 00:21:31.488 "bdev_name": "Malloc0", 00:21:31.488 "nbd_device": "/dev/nbd0" 00:21:31.488 }, 00:21:31.488 { 00:21:31.488 "bdev_name": "Malloc1", 00:21:31.488 "nbd_device": "/dev/nbd1" 00:21:31.488 } 00:21:31.488 ]' 00:21:31.488 11:04:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:31.488 /dev/nbd1' 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:31.488 /dev/nbd1' 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:31.488 256+0 records in 00:21:31.488 256+0 records out 00:21:31.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00821582 s, 128 MB/s 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:31.488 256+0 records in 00:21:31.488 256+0 records out 00:21:31.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228291 s, 45.9 MB/s 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:31.488 256+0 records in 00:21:31.488 256+0 records out 00:21:31.488 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262321 s, 40.0 MB/s 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:31.488 11:04:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:31.488 11:04:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:31.747 11:04:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:32.314 11:04:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:32.573 11:04:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:32.573 11:04:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:32.832 11:04:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:33.091 [2024-12-05 11:04:57.548628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:33.091 [2024-12-05 11:04:57.601117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.091 [2024-12-05 11:04:57.601123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.091 [2024-12-05 11:04:57.644993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:33.091 [2024-12-05 11:04:57.645040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:36.378 spdk_app_start Round 2 00:21:36.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:36.378 11:05:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:36.378 11:05:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:21:36.378 11:05:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60382 /var/tmp/spdk-nbd.sock 00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60382 ']' 00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
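The matching teardown, traced just before Round 2 above, stops each NBD device over RPC and then waits for its name to leave /proc/partitions. A sketch of the two helpers from bdev/nbd_common.sh as the trace shows them (@49-@55 stop loop, @35-@45 exit wait); the poll interval inside waitfornbd_exit is an assumption.

nbd_stop_disks() {
    local rpc_server=$1
    local nbd_list=($2) # word-split, matching the trace's array form
    local i

    for i in "${nbd_list[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i" # @54
        waitfornbd_exit "$(basename "$i")" # @55
    done
}

waitfornbd_exit() {
    local nbd_name=$1 i

    for ((i = 1; i <= 20; i++)); do
        # Done once the kernel no longer reports the device (@38/@41).
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1 # assumed poll interval
    done
    return 0 # @45
}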
00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.378 11:05:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:36.378 11:05:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:36.378 Malloc0 00:21:36.378 11:05:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:36.637 Malloc1 00:21:36.637 11:05:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:36.637 11:05:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:36.637 11:05:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:36.637 11:05:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:36.637 11:05:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:36.637 11:05:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:36.637 11:05:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.638 11:05:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:36.902 /dev/nbd0 00:21:36.902 11:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.902 11:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:36.902 1+0 records in 00:21:36.902 1+0 records out 
00:21:36.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273755 s, 15.0 MB/s 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:36.902 11:05:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:36.902 11:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.902 11:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.902 11:05:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:37.160 /dev/nbd1 00:21:37.160 11:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:37.160 11:05:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:37.160 1+0 records in 00:21:37.160 1+0 records out 00:21:37.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379033 s, 10.8 MB/s 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:37.160 11:05:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:37.160 11:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.160 11:05:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:37.160 11:05:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:37.160 11:05:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:37.160 11:05:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:37.726 { 00:21:37.726 "bdev_name": "Malloc0", 00:21:37.726 "nbd_device": "/dev/nbd0" 00:21:37.726 }, 00:21:37.726 { 00:21:37.726 "bdev_name": "Malloc1", 00:21:37.726 "nbd_device": "/dev/nbd1" 00:21:37.726 } 
00:21:37.726 ]' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:37.726 { 00:21:37.726 "bdev_name": "Malloc0", 00:21:37.726 "nbd_device": "/dev/nbd0" 00:21:37.726 }, 00:21:37.726 { 00:21:37.726 "bdev_name": "Malloc1", 00:21:37.726 "nbd_device": "/dev/nbd1" 00:21:37.726 } 00:21:37.726 ]' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:37.726 /dev/nbd1' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:37.726 /dev/nbd1' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:37.726 256+0 records in 00:21:37.726 256+0 records out 00:21:37.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700092 s, 150 MB/s 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:37.726 256+0 records in 00:21:37.726 256+0 records out 00:21:37.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210023 s, 49.9 MB/s 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:37.726 256+0 records in 00:21:37.726 256+0 records out 00:21:37.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023972 s, 43.7 MB/s 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.726 11:05:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.984 11:05:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:38.331 11:05:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:38.591 11:05:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:38.591 11:05:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:39.160 11:05:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:39.160 [2024-12-05 11:05:03.677644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:39.160 [2024-12-05 11:05:03.731650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.160 [2024-12-05 11:05:03.731654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.160 [2024-12-05 11:05:03.774357] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:39.160 [2024-12-05 11:05:03.774408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:42.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:42.446 11:05:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60382 /var/tmp/spdk-nbd.sock 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60382 ']' 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
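waitforlisten itself runs most of its body under xtrace_disable, so only its argument checks and the 'Waiting for process...' banner appear above. The sketch below fills the hidden loop with an assumed probe (process alive, socket present) and should be read as a guess at the shape, not the traced code.

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock} # @839: /var/tmp/spdk-nbd.sock in this run
    local max_retries=100                   # @840
    local i

    [[ -z $pid ]] && return 1 # @835
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..." # @842
    for ((i = 0; i < max_retries; i++)); do
        # Assumed probe: the real loop body is hidden by xtrace_disable.
        kill -0 "$pid" 2> /dev/null || return 1
        [[ -S $rpc_addr ]] && return 0
        sleep 0.5
    done
    return 1
}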
00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:42.446 11:05:06 event.app_repeat -- event/event.sh@39 -- # killprocess 60382 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60382 ']' 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60382 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60382 00:21:42.446 killing process with pid 60382 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60382' 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60382 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60382 00:21:42.446 spdk_app_start is called in Round 0. 00:21:42.446 Shutdown signal received, stop current app iteration 00:21:42.446 Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 reinitialization... 00:21:42.446 spdk_app_start is called in Round 1. 00:21:42.446 Shutdown signal received, stop current app iteration 00:21:42.446 Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 reinitialization... 00:21:42.446 spdk_app_start is called in Round 2. 00:21:42.446 Shutdown signal received, stop current app iteration 00:21:42.446 Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 reinitialization... 00:21:42.446 spdk_app_start is called in Round 3. 00:21:42.446 Shutdown signal received, stop current app iteration 00:21:42.446 11:05:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:21:42.446 11:05:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:21:42.446 00:21:42.446 real 0m19.709s 00:21:42.446 user 0m44.197s 00:21:42.446 sys 0m3.765s 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.446 11:05:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:42.446 ************************************ 00:21:42.446 END TEST app_repeat 00:21:42.446 ************************************ 00:21:42.446 11:05:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:21:42.446 11:05:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:21:42.446 11:05:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:42.446 11:05:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.446 11:05:07 event -- common/autotest_common.sh@10 -- # set +x 00:21:42.446 ************************************ 00:21:42.446 START TEST cpu_locks 00:21:42.446 ************************************ 00:21:42.446 11:05:07 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:21:42.705 * Looking for test storage... 
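killprocess, traced above at @954-@978, is more fully visible: verify the pid, resolve the command name on Linux, log, SIGTERM, and reap. A sketch; the sudo special case tested at @964 is not exercised in this run and is left as a comment.

killprocess() {
    local pid=$1 process_name

    [[ -z $pid ]] && return 1      # @954
    kill -0 "$pid" || return 1     # @958: still running?
    if [[ $(uname) == Linux ]]; then                    # @959
        process_name=$(ps --no-headers -o comm= "$pid") # @960: reactor_0 here
    fi
    # @964: a process named 'sudo' would need its child killed instead;
    # that branch is not taken in this run.
    echo "killing process with pid $pid" # @972
    kill "$pid"                          # @973
    wait "$pid"                          # @978: reap and propagate status
}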
00:21:42.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.705 11:05:07 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.705 --rc genhtml_branch_coverage=1 00:21:42.705 --rc genhtml_function_coverage=1 00:21:42.705 --rc genhtml_legend=1 00:21:42.705 --rc geninfo_all_blocks=1 00:21:42.705 --rc geninfo_unexecuted_blocks=1 00:21:42.705 00:21:42.705 ' 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.705 --rc genhtml_branch_coverage=1 00:21:42.705 --rc genhtml_function_coverage=1 
00:21:42.705 --rc genhtml_legend=1 00:21:42.705 --rc geninfo_all_blocks=1 00:21:42.705 --rc geninfo_unexecuted_blocks=1 00:21:42.705 00:21:42.705 ' 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.705 --rc genhtml_branch_coverage=1 00:21:42.705 --rc genhtml_function_coverage=1 00:21:42.705 --rc genhtml_legend=1 00:21:42.705 --rc geninfo_all_blocks=1 00:21:42.705 --rc geninfo_unexecuted_blocks=1 00:21:42.705 00:21:42.705 ' 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:42.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.705 --rc genhtml_branch_coverage=1 00:21:42.705 --rc genhtml_function_coverage=1 00:21:42.705 --rc genhtml_legend=1 00:21:42.705 --rc geninfo_all_blocks=1 00:21:42.705 --rc geninfo_unexecuted_blocks=1 00:21:42.705 00:21:42.705 ' 00:21:42.705 11:05:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:21:42.705 11:05:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:21:42.705 11:05:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:21:42.705 11:05:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.705 11:05:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:42.705 ************************************ 00:21:42.705 START TEST default_locks 00:21:42.705 ************************************ 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61021 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61021 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61021 ']' 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.705 11:05:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:21:42.705 [2024-12-05 11:05:07.319413] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:21:42.705 [2024-12-05 11:05:07.319517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61021 ] 00:21:42.964 [2024-12-05 11:05:07.460734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.964 [2024-12-05 11:05:07.532797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.899 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.899 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:21:43.899 11:05:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61021 00:21:43.899 11:05:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61021 00:21:43.899 11:05:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:44.158 11:05:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61021 00:21:44.158 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 61021 ']' 00:21:44.158 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 61021 00:21:44.158 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:21:44.158 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.158 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61021 00:21:44.469 killing process with pid 61021 00:21:44.469 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:44.469 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:44.469 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61021' 00:21:44.469 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 61021 00:21:44.469 11:05:08 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 61021 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61021 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61021 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 61021 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 61021 ']' 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.727 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:21:44.727 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61021) - No such process 00:21:44.727 ERROR: process (pid: 61021) is no longer running 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:21:44.727 ************************************ 00:21:44.727 END TEST default_locks 00:21:44.727 ************************************ 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:21:44.727 00:21:44.727 real 0m1.894s 00:21:44.727 user 0m2.025s 00:21:44.727 sys 0m0.635s 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.727 11:05:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:21:44.727 11:05:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:21:44.727 11:05:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:44.727 11:05:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.727 11:05:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:44.727 ************************************ 00:21:44.727 START TEST default_locks_via_rpc 00:21:44.727 ************************************ 00:21:44.727 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:21:44.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
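default_locks deliberately calls waitforlisten on a dead pid through the NOT wrapper, and the @652-@679 tags above outline how NOT works: run the command, capture its exit status, and invert it, folding signal deaths (status > 128) into a plain failure. A sketch with the valid_exec_arg argument check elided:

NOT() {
    local es=0 # @652

    # @654: valid_exec_arg verifies $1 is a function or executable (elided).
    "$@" || es=$? # @655: es=1 here, since waitforlisten failed as expected

    ((es > 128)) && es=1 # @663: death by signal counts as a plain failure (assumed fold)

    ((!es == 0)) # @679: succeed only if the wrapped command failed
}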
00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61085 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61085 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61085 ']' 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.728 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:44.728 [2024-12-05 11:05:09.282541] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:44.728 [2024-12-05 11:05:09.282922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61085 ] 00:21:44.987 [2024-12-05 11:05:09.432670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.987 [2024-12-05 11:05:09.482558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:21:45.246 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.247 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:45.247 11:05:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.247 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61085 00:21:45.247 11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61085 00:21:45.247 
11:05:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61085 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61085 ']' 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61085 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61085 00:21:45.814 killing process with pid 61085 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61085' 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61085 00:21:45.814 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61085 00:21:46.073 ************************************ 00:21:46.073 END TEST default_locks_via_rpc 00:21:46.073 ************************************ 00:21:46.073 00:21:46.073 real 0m1.438s 00:21:46.073 user 0m1.464s 00:21:46.073 sys 0m0.590s 00:21:46.073 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.074 11:05:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:46.074 11:05:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:21:46.074 11:05:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:46.074 11:05:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.074 11:05:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:46.074 ************************************ 00:21:46.074 START TEST non_locking_app_on_locked_coremask 00:21:46.074 ************************************ 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:21:46.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
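The locks_exist probe used throughout this suite (event/cpu_locks.sh@22, traced above) asks the kernel rather than SPDK whether the target still holds its core locks. A minimal standalone sketch of that helper, assuming util-linux lslocks is available:

    # Succeeds only if the process holds at least one lock whose path
    # contains "spdk_cpu_lock" (the /var/tmp/spdk_cpu_lock_NNN files).
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }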
00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61141 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61141 /var/tmp/spdk.sock 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61141 ']' 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.074 11:05:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:46.333 [2024-12-05 11:05:10.774173] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:46.333 [2024-12-05 11:05:10.774528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61141 ] 00:21:46.333 [2024-12-05 11:05:10.925953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.333 [2024-12-05 11:05:10.982623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61155 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61155 /var/tmp/spdk2.sock 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61155 ']' 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:46.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
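The two spdk_tgt launches above show the pattern this test relies on: the first instance uses the default RPC socket (/var/tmp/spdk.sock) with core locks enabled, while the second targets the same core but passes --disable-cpumask-locks so it never tries to claim /var/tmp/spdk_cpu_lock_000, plus -r to listen on a separate socket. A condensed sketch (binary path as in this workspace; the real test gates each launch with waitforlisten):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                          # claims core 0's lock file
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks \
                -r /var/tmp/spdk2.sock &          # shares core 0, creates no lock file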
00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.592 11:05:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:46.852 [2024-12-05 11:05:11.274413] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:46.852 [2024-12-05 11:05:11.274520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:21:46.852 [2024-12-05 11:05:11.431663] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:21:46.852 [2024-12-05 11:05:11.431714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.111 [2024-12-05 11:05:11.540658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.119 11:05:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.119 11:05:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:48.119 11:05:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61141 00:21:48.119 11:05:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61141 00:21:48.119 11:05:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61141 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61141 ']' 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61141 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61141 00:21:49.055 killing process with pid 61141 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61141' 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61141 00:21:49.055 11:05:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61141 00:21:49.620 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61155 00:21:49.620 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61155 ']' 00:21:49.620 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61155 00:21:49.620 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:21:49.620 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.620 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61155 00:21:49.621 killing process with pid 61155 00:21:49.621 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.621 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.621 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61155' 00:21:49.621 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61155 00:21:49.621 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61155 00:21:49.878 00:21:49.878 real 0m3.690s 00:21:49.878 user 0m4.160s 00:21:49.878 sys 0m1.226s 00:21:49.878 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.878 ************************************ 00:21:49.878 END TEST non_locking_app_on_locked_coremask 00:21:49.878 ************************************ 00:21:49.878 11:05:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:49.878 11:05:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:21:49.878 11:05:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:49.878 11:05:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.878 11:05:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:49.878 ************************************ 00:21:49.878 START TEST locking_app_on_unlocked_coremask 00:21:49.878 ************************************ 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61234 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61234 /var/tmp/spdk.sock 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61234 ']' 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
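Here the roles are inverted: the first target starts with --disable-cpumask-locks (the "CPU core locks deactivated" notice that follows confirms it), leaving core 0 unclaimed so a second, lock-enabled target can take it. Locks can be dropped either at startup via that flag or at runtime through the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs exercised by default_locks_via_rpc earlier. A sketch of both, assuming the stock scripts/rpc.py helper (the test itself goes through its own rpc_cmd wrapper):

    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &   # startup-time: no lock files created
    # runtime: drop core locks on an already-running target (default socket /var/tmp/spdk.sock)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks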
00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.878 11:05:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:50.135 [2024-12-05 11:05:14.532566] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:50.135 [2024-12-05 11:05:14.532690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61234 ] 00:21:50.135 [2024-12-05 11:05:14.677073] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:21:50.135 [2024-12-05 11:05:14.677125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.135 [2024-12-05 11:05:14.729690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.068 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.068 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:51.068 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:21:51.068 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61262 00:21:51.069 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61262 /var/tmp/spdk2.sock 00:21:51.069 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61262 ']' 00:21:51.069 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:51.069 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.069 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:51.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:51.069 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.069 11:05:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:51.069 [2024-12-05 11:05:15.522617] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:21:51.069 [2024-12-05 11:05:15.522942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61262 ] 00:21:51.069 [2024-12-05 11:05:15.688621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.327 [2024-12-05 11:05:15.797627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.945 11:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.945 11:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:51.945 11:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61262 00:21:51.945 11:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61262 00:21:51.945 11:05:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61234 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61234 ']' 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61234 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61234 00:21:53.327 killing process with pid 61234 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61234' 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61234 00:21:53.327 11:05:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61234 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61262 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61262 ']' 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61262 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61262 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61262' 00:21:53.894 killing process with pid 61262 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61262 00:21:53.894 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61262 00:21:54.153 ************************************ 00:21:54.153 END TEST locking_app_on_unlocked_coremask 00:21:54.153 ************************************ 00:21:54.153 00:21:54.153 real 0m4.148s 00:21:54.153 user 0m4.725s 00:21:54.153 sys 0m1.242s 00:21:54.153 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.153 11:05:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:54.153 11:05:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:21:54.153 11:05:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:54.153 11:05:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.153 11:05:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:54.153 ************************************ 00:21:54.153 START TEST locking_app_on_locked_coremask 00:21:54.153 ************************************ 00:21:54.153 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:21:54.153 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61341 00:21:54.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.153 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61341 /var/tmp/spdk.sock 00:21:54.153 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:54.153 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61341 ']' 00:21:54.154 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.154 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.154 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.154 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.154 11:05:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:54.154 [2024-12-05 11:05:18.774309] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:21:54.154 [2024-12-05 11:05:18.774466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61341 ] 00:21:54.414 [2024-12-05 11:05:18.934583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.414 [2024-12-05 11:05:18.989691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61369 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61369 /var/tmp/spdk2.sock 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61369 /var/tmp/spdk2.sock 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61369 /var/tmp/spdk2.sock 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61369 ']' 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.362 11:05:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:55.362 [2024-12-05 11:05:19.750219] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
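locking_app_on_locked_coremask asserts the negative case: with pid 61341 holding core 0's lock, a second target on the same mask must fail during startup, which the claim error just below confirms. The NOT wrapper (an autotest_common.sh helper, per the es=1 bookkeeping traced here) succeeds only when the wrapped command fails. Roughly, assuming autotest_common.sh is sourced:

    "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # passes because startup aborts:
                                                    # "Cannot create lock on core 0, ..."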
00:21:55.362 [2024-12-05 11:05:19.750339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61369 ] 00:21:55.362 [2024-12-05 11:05:19.910558] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61341 has claimed it. 00:21:55.362 [2024-12-05 11:05:19.910631] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:21:55.929 ERROR: process (pid: 61369) is no longer running 00:21:55.929 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61369) - No such process 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61341 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61341 00:21:55.929 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:56.497 11:05:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61341 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61341 ']' 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61341 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61341 00:21:56.497 killing process with pid 61341 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61341' 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61341 00:21:56.497 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61341 00:21:56.755 ************************************ 00:21:56.755 END TEST locking_app_on_locked_coremask 00:21:56.755 ************************************ 00:21:56.755 00:21:56.755 real 0m2.694s 00:21:56.755 user 0m3.155s 00:21:56.755 sys 0m0.728s 00:21:56.755 11:05:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.755 11:05:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:57.015 11:05:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:21:57.015 11:05:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:57.015 11:05:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.015 11:05:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:57.015 ************************************ 00:21:57.015 START TEST locking_overlapped_coremask 00:21:57.015 ************************************ 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61425 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61425 /var/tmp/spdk.sock 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61425 ']' 00:21:57.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.015 11:05:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:57.015 [2024-12-05 11:05:21.504088] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:21:57.015 [2024-12-05 11:05:21.504203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61425 ] 00:21:57.015 [2024-12-05 11:05:21.661010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:57.273 [2024-12-05 11:05:21.730749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.273 [2024-12-05 11:05:21.730927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:57.273 [2024-12-05 11:05:21.730930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61451 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61451 /var/tmp/spdk2.sock 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61451 /var/tmp/spdk2.sock 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61451 /var/tmp/spdk2.sock 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61451 ']' 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:57.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.840 11:05:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 [2024-12-05 11:05:22.498944] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
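The two masks are picked to overlap on exactly one core: 0x7 is binary 00111 (cores 0-2) and 0x1c is 11100 (cores 2-4), so the second target can only collide on core 2, which the claim error below reports. Quick check:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2, i.e. core 2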
00:21:58.098 [2024-12-05 11:05:22.499205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61451 ] 00:21:58.098 [2024-12-05 11:05:22.652150] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61425 has claimed it. 00:21:58.098 [2024-12-05 11:05:22.652218] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:21:58.664 ERROR: process (pid: 61451) is no longer running 00:21:58.664 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61451) - No such process 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61425 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61425 ']' 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61425 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61425 00:21:58.664 killing process with pid 61425 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61425' 00:21:58.664 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61425 00:21:58.664 11:05:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61425 00:21:59.233 ************************************ 00:21:59.233 END TEST locking_overlapped_coremask 00:21:59.233 ************************************ 00:21:59.233 00:21:59.233 real 0m2.171s 00:21:59.233 user 0m6.107s 00:21:59.233 sys 0m0.438s 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:59.233 11:05:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:21:59.233 11:05:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:59.233 11:05:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.233 11:05:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:59.233 ************************************ 00:21:59.233 START TEST locking_overlapped_coremask_via_rpc 00:21:59.233 ************************************ 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61503 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61503 /var/tmp/spdk.sock 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61503 ']' 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.233 11:05:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.233 [2024-12-05 11:05:23.709990] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:59.233 [2024-12-05 11:05:23.710071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61503 ] 00:21:59.233 [2024-12-05 11:05:23.855591] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:21:59.233 [2024-12-05 11:05:23.855646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:59.493 [2024-12-05 11:05:23.907574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.493 [2024-12-05 11:05:23.907667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.493 [2024-12-05 11:05:23.907671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61520 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61520 /var/tmp/spdk2.sock 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61520 ']' 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:59.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.493 11:05:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:59.752 [2024-12-05 11:05:24.209619] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:21:59.752 [2024-12-05 11:05:24.210438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61520 ] 00:21:59.752 [2024-12-05 11:05:24.371724] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:21:59.752 [2024-12-05 11:05:24.371769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:00.011 [2024-12-05 11:05:24.490032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.011 [2024-12-05 11:05:24.495709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.011 [2024-12-05 11:05:24.495710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:22:00.578 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:00.579 [2024-12-05 11:05:25.178735] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61503 has claimed it. 00:22:00.579 2024/12/05 11:05:25 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:22:00.579 request: 00:22:00.579 { 00:22:00.579 "method": "framework_enable_cpumask_locks", 00:22:00.579 "params": {} 00:22:00.579 } 00:22:00.579 Got JSON-RPC error response 00:22:00.579 GoRPCClient: error on JSON-RPC call 00:22:00.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
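This is the RPC-driven variant: both targets boot with --disable-cpumask-locks, the first then claims cores 0-2 via framework_enable_cpumask_locks, and the same call on the second must fail with JSON-RPC error -32603 ("Failed to claim CPU core: 2"), exactly as logged above. A sketch of reproducing the failing call with the stock rpc.py helper (the test uses its rpc_cmd/GoRPCClient wrapper instead):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if ! "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo 'expected: core 2 already claimed by the first target'
    fi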
00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61503 /var/tmp/spdk.sock 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61503 ']' 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.579 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61520 /var/tmp/spdk2.sock 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61520 ']' 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:00.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
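The check_remaining_locks step that follows verifies that exactly the lock files implied by the primary mask survive: for -m 0x7 that is /var/tmp/spdk_cpu_lock_000 through _002, encoded by the glob-vs-brace-expansion comparison at event/cpu_locks.sh@36-38 (the suite's cleanup at cpu_locks.sh@18 later removes them with rm -f). Condensed:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }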
00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.873 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:01.440 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.441 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:01.441 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:22:01.441 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:22:01.441 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:22:01.441 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:22:01.441 00:22:01.441 real 0m2.156s 00:22:01.441 user 0m1.202s 00:22:01.441 sys 0m0.236s 00:22:01.441 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.441 ************************************ 00:22:01.441 END TEST locking_overlapped_coremask_via_rpc 00:22:01.441 ************************************ 00:22:01.441 11:05:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:01.441 11:05:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:22:01.441 11:05:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61503 ]] 00:22:01.441 11:05:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61503 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61503 ']' 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61503 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61503 00:22:01.441 killing process with pid 61503 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61503' 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61503 00:22:01.441 11:05:25 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61503 00:22:01.699 11:05:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61520 ]] 00:22:01.699 11:05:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61520 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61520 ']' 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61520 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.699 
11:05:26 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61520 00:22:01.699 killing process with pid 61520 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61520' 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61520 00:22:01.699 11:05:26 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61520 00:22:01.958 11:05:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:22:01.958 11:05:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:22:01.958 11:05:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61503 ]] 00:22:01.958 11:05:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61503 00:22:01.958 Process with pid 61503 is not found 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61503 ']' 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61503 00:22:01.958 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61503) - No such process 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61503 is not found' 00:22:01.958 11:05:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61520 ]] 00:22:01.958 11:05:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61520 00:22:01.958 Process with pid 61520 is not found 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61520 ']' 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61520 00:22:01.958 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61520) - No such process 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61520 is not found' 00:22:01.958 11:05:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:22:01.958 ************************************ 00:22:01.958 END TEST cpu_locks 00:22:01.958 ************************************ 00:22:01.958 00:22:01.958 real 0m19.569s 00:22:01.958 user 0m34.122s 00:22:01.958 sys 0m6.039s 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.958 11:05:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:02.218 ************************************ 00:22:02.218 END TEST event 00:22:02.218 ************************************ 00:22:02.218 00:22:02.218 real 0m48.353s 00:22:02.218 user 1m33.949s 00:22:02.218 sys 0m10.700s 00:22:02.218 11:05:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.218 11:05:26 event -- common/autotest_common.sh@10 -- # set +x 00:22:02.218 11:05:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:22:02.218 11:05:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:02.218 11:05:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.218 11:05:26 -- common/autotest_common.sh@10 -- # set +x 00:22:02.218 ************************************ 00:22:02.218 START TEST thread 00:22:02.218 ************************************ 00:22:02.218 11:05:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:22:02.218 * Looking for test storage... 
00:22:02.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:22:02.218 11:05:26 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:02.218 11:05:26 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:22:02.218 11:05:26 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:02.477 11:05:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.477 11:05:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.477 11:05:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.477 11:05:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.477 11:05:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.477 11:05:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.477 11:05:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.477 11:05:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.477 11:05:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.477 11:05:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.477 11:05:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.477 11:05:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:22:02.477 11:05:26 thread -- scripts/common.sh@345 -- # : 1 00:22:02.477 11:05:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.477 11:05:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.477 11:05:26 thread -- scripts/common.sh@365 -- # decimal 1 00:22:02.477 11:05:26 thread -- scripts/common.sh@353 -- # local d=1 00:22:02.477 11:05:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.477 11:05:26 thread -- scripts/common.sh@355 -- # echo 1 00:22:02.477 11:05:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.477 11:05:26 thread -- scripts/common.sh@366 -- # decimal 2 00:22:02.477 11:05:26 thread -- scripts/common.sh@353 -- # local d=2 00:22:02.477 11:05:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.477 11:05:26 thread -- scripts/common.sh@355 -- # echo 2 00:22:02.477 11:05:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.477 11:05:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.477 11:05:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.477 11:05:26 thread -- scripts/common.sh@368 -- # return 0 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.477 --rc genhtml_branch_coverage=1 00:22:02.477 --rc genhtml_function_coverage=1 00:22:02.477 --rc genhtml_legend=1 00:22:02.477 --rc geninfo_all_blocks=1 00:22:02.477 --rc geninfo_unexecuted_blocks=1 00:22:02.477 00:22:02.477 ' 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.477 --rc genhtml_branch_coverage=1 00:22:02.477 --rc genhtml_function_coverage=1 00:22:02.477 --rc genhtml_legend=1 00:22:02.477 --rc geninfo_all_blocks=1 00:22:02.477 --rc geninfo_unexecuted_blocks=1 00:22:02.477 00:22:02.477 ' 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:02.477 --rc genhtml_branch_coverage=1 00:22:02.477 --rc genhtml_function_coverage=1 00:22:02.477 --rc genhtml_legend=1 00:22:02.477 --rc geninfo_all_blocks=1 00:22:02.477 --rc geninfo_unexecuted_blocks=1 00:22:02.477 00:22:02.477 ' 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:02.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.477 --rc genhtml_branch_coverage=1 00:22:02.477 --rc genhtml_function_coverage=1 00:22:02.477 --rc genhtml_legend=1 00:22:02.477 --rc geninfo_all_blocks=1 00:22:02.477 --rc geninfo_unexecuted_blocks=1 00:22:02.477 00:22:02.477 ' 00:22:02.477 11:05:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.477 11:05:26 thread -- common/autotest_common.sh@10 -- # set +x 00:22:02.477 ************************************ 00:22:02.477 START TEST thread_poller_perf 00:22:02.477 ************************************ 00:22:02.477 11:05:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:22:02.477 [2024-12-05 11:05:26.920812] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:22:02.477 [2024-12-05 11:05:26.921007] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61676 ] 00:22:02.477 [2024-12-05 11:05:27.070019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.736 [2024-12-05 11:05:27.134892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.736 Running 1000 pollers for 1 seconds with 1 microseconds period. 
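For reference, the poller_cost figure in the summary that follows is consistent with busy cycles divided by total_run_count, converted to nanoseconds via the reported tsc_hz. A back-of-envelope check (a sketch, not part of the test, using the numbers printed below):

    awk 'BEGIN {
      cyc = int(2108428002 / 366000)       # busy cycles / total_run_count
      ns  = int(cyc * 1e9 / 2100000000)    # cycles to nanoseconds at the reported 2.1 GHz TSC
      printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns
    }'

which reproduces the reported 5760 (cyc) / 2742 (nsec).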
00:22:03.672 [2024-12-05T11:05:28.324Z] ====================================== 00:22:03.672 [2024-12-05T11:05:28.324Z] busy:2108428002 (cyc) 00:22:03.672 [2024-12-05T11:05:28.324Z] total_run_count: 366000 00:22:03.672 [2024-12-05T11:05:28.324Z] tsc_hz: 2100000000 (cyc) 00:22:03.672 [2024-12-05T11:05:28.324Z] ====================================== 00:22:03.672 [2024-12-05T11:05:28.324Z] poller_cost: 5760 (cyc), 2742 (nsec) 00:22:03.672 00:22:03.672 real 0m1.284s 00:22:03.672 ************************************ 00:22:03.672 END TEST thread_poller_perf 00:22:03.672 ************************************ 00:22:03.672 user 0m1.137s 00:22:03.672 sys 0m0.040s 00:22:03.672 11:05:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.672 11:05:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:22:03.672 11:05:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:22:03.672 11:05:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:22:03.672 11:05:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.672 11:05:28 thread -- common/autotest_common.sh@10 -- # set +x 00:22:03.672 ************************************ 00:22:03.672 START TEST thread_poller_perf 00:22:03.672 ************************************ 00:22:03.672 11:05:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:22:03.672 [2024-12-05 11:05:28.266868] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:22:03.672 [2024-12-05 11:05:28.267130] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61712 ] 00:22:03.931 [2024-12-05 11:05:28.407804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.931 Running 1000 pollers for 1 seconds with 0 microseconds period. 
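This second run passes -l 0, i.e. a 0 microseconds period: the pollers fire on every reactor iteration rather than on a timer, which is why total_run_count below is roughly 14x higher and the per-poll cost far lower. The same arithmetic applies (a sketch, using the numbers printed below):

    awk 'BEGIN { c = int(2101699626 / 5149000); printf "%d cyc, %d nsec\n", c, int(c * 1e9 / 2100000000) }'

matching the reported 408 (cyc) / 194 (nsec).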
00:22:03.931 [2024-12-05 11:05:28.462250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.959 [2024-12-05T11:05:29.611Z] ====================================== 00:22:04.959 [2024-12-05T11:05:29.611Z] busy:2101699626 (cyc) 00:22:04.959 [2024-12-05T11:05:29.611Z] total_run_count: 5149000 00:22:04.959 [2024-12-05T11:05:29.611Z] tsc_hz: 2100000000 (cyc) 00:22:04.959 [2024-12-05T11:05:29.611Z] ====================================== 00:22:04.959 [2024-12-05T11:05:29.611Z] poller_cost: 408 (cyc), 194 (nsec) 00:22:04.959 00:22:04.959 real 0m1.262s 00:22:04.959 user 0m1.109s 00:22:04.959 sys 0m0.047s 00:22:04.959 11:05:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.959 11:05:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:22:04.959 ************************************ 00:22:04.959 END TEST thread_poller_perf 00:22:04.959 ************************************ 00:22:04.959 11:05:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:22:04.959 ************************************ 00:22:04.959 END TEST thread 00:22:04.959 ************************************ 00:22:04.959 00:22:04.959 real 0m2.841s 00:22:04.959 user 0m2.390s 00:22:04.959 sys 0m0.243s 00:22:04.959 11:05:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.959 11:05:29 thread -- common/autotest_common.sh@10 -- # set +x 00:22:04.959 11:05:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:22:04.959 11:05:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:22:04.959 11:05:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:04.959 11:05:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.959 11:05:29 -- common/autotest_common.sh@10 -- # set +x 00:22:05.218 ************************************ 00:22:05.218 START TEST app_cmdline 00:22:05.218 ************************************ 00:22:05.218 11:05:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:22:05.218 * Looking for test storage... 
00:22:05.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:22:05.218 11:05:29 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:05.218 11:05:29 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:22:05.218 11:05:29 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:05.218 11:05:29 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:22:05.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
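Stripped of the coverage-detection noise, the cmdline test this spdk_tgt instance backs reduces to three RPC calls against a target started with --rpcs-allowed spdk_get_version,rpc_get_methods (a sketch assembled from the commands traced below):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # whitelisted: returns the version object
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # whitelisted: must list exactly those two methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # not whitelisted: expected to fail with -32601 Method not found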
00:22:05.218 11:05:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.219 11:05:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.219 11:05:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.219 11:05:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:05.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.219 --rc genhtml_branch_coverage=1 00:22:05.219 --rc genhtml_function_coverage=1 00:22:05.219 --rc genhtml_legend=1 00:22:05.219 --rc geninfo_all_blocks=1 00:22:05.219 --rc geninfo_unexecuted_blocks=1 00:22:05.219 00:22:05.219 ' 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:05.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.219 --rc genhtml_branch_coverage=1 00:22:05.219 --rc genhtml_function_coverage=1 00:22:05.219 --rc genhtml_legend=1 00:22:05.219 --rc geninfo_all_blocks=1 00:22:05.219 --rc geninfo_unexecuted_blocks=1 00:22:05.219 00:22:05.219 ' 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:05.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.219 --rc genhtml_branch_coverage=1 00:22:05.219 --rc genhtml_function_coverage=1 00:22:05.219 --rc genhtml_legend=1 00:22:05.219 --rc geninfo_all_blocks=1 00:22:05.219 --rc geninfo_unexecuted_blocks=1 00:22:05.219 00:22:05.219 ' 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:05.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.219 --rc genhtml_branch_coverage=1 00:22:05.219 --rc genhtml_function_coverage=1 00:22:05.219 --rc genhtml_legend=1 00:22:05.219 --rc geninfo_all_blocks=1 00:22:05.219 --rc geninfo_unexecuted_blocks=1 00:22:05.219 00:22:05.219 ' 00:22:05.219 11:05:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:22:05.219 11:05:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61793 00:22:05.219 11:05:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61793 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61793 ']' 00:22:05.219 11:05:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.219 11:05:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:22:05.219 [2024-12-05 11:05:29.868714] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:22:05.219 [2024-12-05 11:05:29.868934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61793 ] 00:22:05.478 [2024-12-05 11:05:30.009420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.478 [2024-12-05 11:05:30.060439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.736 11:05:30 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.736 11:05:30 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:22:05.736 11:05:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:22:05.996 { 00:22:05.996 "fields": { 00:22:05.996 "commit": "688351e0e", 00:22:05.996 "major": 25, 00:22:05.996 "minor": 1, 00:22:05.996 "patch": 0, 00:22:05.996 "suffix": "-pre" 00:22:05.996 }, 00:22:05.996 "version": "SPDK v25.01-pre git sha1 688351e0e" 00:22:05.996 } 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:22:05.996 11:05:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:05.996 11:05:30 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:22:06.255 2024/12/05 11:05:30 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:22:06.255 request: 00:22:06.255 { 00:22:06.255 "method": "env_dpdk_get_mem_stats", 00:22:06.255 "params": {} 00:22:06.255 } 00:22:06.255 Got JSON-RPC error response 00:22:06.255 GoRPCClient: error on JSON-RPC call 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.255 11:05:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61793 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61793 ']' 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61793 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.255 11:05:30 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61793 00:22:06.514 11:05:30 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.514 killing process with pid 61793 00:22:06.514 11:05:30 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.514 11:05:30 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61793' 00:22:06.514 11:05:30 app_cmdline -- common/autotest_common.sh@973 -- # kill 61793 00:22:06.514 11:05:30 app_cmdline -- common/autotest_common.sh@978 -- # wait 61793 00:22:06.773 00:22:06.773 real 0m1.645s 00:22:06.773 user 0m2.021s 00:22:06.773 sys 0m0.489s 00:22:06.773 11:05:31 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.773 11:05:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:22:06.773 ************************************ 00:22:06.773 END TEST app_cmdline 00:22:06.773 ************************************ 00:22:06.773 11:05:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:22:06.773 11:05:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:06.773 11:05:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.773 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:06.773 ************************************ 00:22:06.773 START TEST version 00:22:06.773 ************************************ 00:22:06.773 11:05:31 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:22:06.773 * Looking for test storage... 
00:22:06.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:22:06.773 11:05:31 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:06.773 11:05:31 version -- common/autotest_common.sh@1711 -- # lcov --version 00:22:06.773 11:05:31 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.050 11:05:31 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.050 11:05:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.050 11:05:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.050 11:05:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.050 11:05:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.050 11:05:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.050 11:05:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.050 11:05:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.050 11:05:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.050 11:05:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.050 11:05:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.050 11:05:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.050 11:05:31 version -- scripts/common.sh@344 -- # case "$op" in 00:22:07.050 11:05:31 version -- scripts/common.sh@345 -- # : 1 00:22:07.050 11:05:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.050 11:05:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.050 11:05:31 version -- scripts/common.sh@365 -- # decimal 1 00:22:07.050 11:05:31 version -- scripts/common.sh@353 -- # local d=1 00:22:07.050 11:05:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.050 11:05:31 version -- scripts/common.sh@355 -- # echo 1 00:22:07.050 11:05:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.050 11:05:31 version -- scripts/common.sh@366 -- # decimal 2 00:22:07.050 11:05:31 version -- scripts/common.sh@353 -- # local d=2 00:22:07.050 11:05:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.050 11:05:31 version -- scripts/common.sh@355 -- # echo 2 00:22:07.050 11:05:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.050 11:05:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.050 11:05:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.050 11:05:31 version -- scripts/common.sh@368 -- # return 0 00:22:07.050 11:05:31 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.050 11:05:31 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.050 --rc genhtml_branch_coverage=1 00:22:07.050 --rc genhtml_function_coverage=1 00:22:07.050 --rc genhtml_legend=1 00:22:07.050 --rc geninfo_all_blocks=1 00:22:07.050 --rc geninfo_unexecuted_blocks=1 00:22:07.050 00:22:07.050 ' 00:22:07.050 11:05:31 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.050 --rc genhtml_branch_coverage=1 00:22:07.050 --rc genhtml_function_coverage=1 00:22:07.050 --rc genhtml_legend=1 00:22:07.050 --rc geninfo_all_blocks=1 00:22:07.050 --rc geninfo_unexecuted_blocks=1 00:22:07.050 00:22:07.050 ' 00:22:07.050 11:05:31 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.050 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:07.050 --rc genhtml_branch_coverage=1 00:22:07.050 --rc genhtml_function_coverage=1 00:22:07.050 --rc genhtml_legend=1 00:22:07.050 --rc geninfo_all_blocks=1 00:22:07.050 --rc geninfo_unexecuted_blocks=1 00:22:07.050 00:22:07.050 ' 00:22:07.050 11:05:31 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.050 --rc genhtml_branch_coverage=1 00:22:07.050 --rc genhtml_function_coverage=1 00:22:07.050 --rc genhtml_legend=1 00:22:07.050 --rc geninfo_all_blocks=1 00:22:07.050 --rc geninfo_unexecuted_blocks=1 00:22:07.050 00:22:07.050 ' 00:22:07.050 11:05:31 version -- app/version.sh@17 -- # get_header_version major 00:22:07.050 11:05:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # cut -f2 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # tr -d '"' 00:22:07.050 11:05:31 version -- app/version.sh@17 -- # major=25 00:22:07.050 11:05:31 version -- app/version.sh@18 -- # get_header_version minor 00:22:07.050 11:05:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # tr -d '"' 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # cut -f2 00:22:07.050 11:05:31 version -- app/version.sh@18 -- # minor=1 00:22:07.050 11:05:31 version -- app/version.sh@19 -- # get_header_version patch 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # cut -f2 00:22:07.050 11:05:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # tr -d '"' 00:22:07.050 11:05:31 version -- app/version.sh@19 -- # patch=0 00:22:07.050 11:05:31 version -- app/version.sh@20 -- # get_header_version suffix 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # cut -f2 00:22:07.050 11:05:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:22:07.050 11:05:31 version -- app/version.sh@14 -- # tr -d '"' 00:22:07.050 11:05:31 version -- app/version.sh@20 -- # suffix=-pre 00:22:07.050 11:05:31 version -- app/version.sh@22 -- # version=25.1 00:22:07.050 11:05:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:22:07.050 11:05:31 version -- app/version.sh@28 -- # version=25.1rc0 00:22:07.050 11:05:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:22:07.050 11:05:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:22:07.050 11:05:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:22:07.050 11:05:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:22:07.050 00:22:07.050 real 0m0.294s 00:22:07.050 user 0m0.179s 00:22:07.050 sys 0m0.159s 00:22:07.050 11:05:31 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:07.050 11:05:31 version -- common/autotest_common.sh@10 -- # set +x 00:22:07.050 ************************************ 00:22:07.050 END TEST version 00:22:07.050 ************************************ 00:22:07.050 11:05:31 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:22:07.050 11:05:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:22:07.050 11:05:31 -- spdk/autotest.sh@194 -- # uname -s 00:22:07.050 11:05:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:07.050 11:05:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:07.050 11:05:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:07.050 11:05:31 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:07.050 11:05:31 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:07.050 11:05:31 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:07.050 11:05:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.050 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:07.310 11:05:31 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:07.310 11:05:31 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:07.310 11:05:31 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:22:07.310 11:05:31 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:22:07.310 11:05:31 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:22:07.310 11:05:31 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:22:07.310 11:05:31 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:22:07.310 11:05:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.310 11:05:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.310 11:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:07.310 ************************************ 00:22:07.310 START TEST nvmf_tcp 00:22:07.310 ************************************ 00:22:07.310 11:05:31 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:22:07.310 * Looking for test storage... 00:22:07.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:07.310 11:05:31 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.310 11:05:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.310 11:05:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.310 11:05:31 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.310 11:05:31 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.311 11:05:31 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.311 --rc genhtml_branch_coverage=1 00:22:07.311 --rc genhtml_function_coverage=1 00:22:07.311 --rc genhtml_legend=1 00:22:07.311 --rc geninfo_all_blocks=1 00:22:07.311 --rc geninfo_unexecuted_blocks=1 00:22:07.311 00:22:07.311 ' 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.311 --rc genhtml_branch_coverage=1 00:22:07.311 --rc genhtml_function_coverage=1 00:22:07.311 --rc genhtml_legend=1 00:22:07.311 --rc geninfo_all_blocks=1 00:22:07.311 --rc geninfo_unexecuted_blocks=1 00:22:07.311 00:22:07.311 ' 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.311 --rc genhtml_branch_coverage=1 00:22:07.311 --rc genhtml_function_coverage=1 00:22:07.311 --rc genhtml_legend=1 00:22:07.311 --rc geninfo_all_blocks=1 00:22:07.311 --rc geninfo_unexecuted_blocks=1 00:22:07.311 00:22:07.311 ' 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.311 --rc genhtml_branch_coverage=1 00:22:07.311 --rc genhtml_function_coverage=1 00:22:07.311 --rc genhtml_legend=1 00:22:07.311 --rc geninfo_all_blocks=1 00:22:07.311 --rc geninfo_unexecuted_blocks=1 00:22:07.311 00:22:07.311 ' 00:22:07.311 11:05:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:22:07.311 11:05:31 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:22:07.311 11:05:31 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.311 11:05:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.311 ************************************ 00:22:07.311 START TEST nvmf_target_core 00:22:07.311 ************************************ 00:22:07.311 11:05:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:22:07.571 * Looking for test storage... 00:22:07.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.571 --rc genhtml_branch_coverage=1 00:22:07.571 --rc genhtml_function_coverage=1 00:22:07.571 --rc genhtml_legend=1 00:22:07.571 --rc geninfo_all_blocks=1 00:22:07.571 --rc geninfo_unexecuted_blocks=1 00:22:07.571 00:22:07.571 ' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.571 --rc genhtml_branch_coverage=1 00:22:07.571 --rc genhtml_function_coverage=1 00:22:07.571 --rc genhtml_legend=1 00:22:07.571 --rc geninfo_all_blocks=1 00:22:07.571 --rc geninfo_unexecuted_blocks=1 00:22:07.571 00:22:07.571 ' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.571 --rc genhtml_branch_coverage=1 00:22:07.571 --rc genhtml_function_coverage=1 00:22:07.571 --rc genhtml_legend=1 00:22:07.571 --rc geninfo_all_blocks=1 00:22:07.571 --rc geninfo_unexecuted_blocks=1 00:22:07.571 00:22:07.571 ' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.571 --rc genhtml_branch_coverage=1 00:22:07.571 --rc genhtml_function_coverage=1 00:22:07.571 --rc genhtml_legend=1 00:22:07.571 --rc geninfo_all_blocks=1 00:22:07.571 --rc geninfo_unexecuted_blocks=1 00:22:07.571 00:22:07.571 ' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.571 11:05:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:07.572 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:07.572 11:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:07.572 ************************************ 00:22:07.572 START TEST nvmf_abort 00:22:07.572 ************************************ 00:22:07.572 11:05:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:22:07.831 * Looking for test storage... 00:22:07.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.831 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:07.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.832 --rc genhtml_branch_coverage=1 00:22:07.832 --rc genhtml_function_coverage=1 00:22:07.832 --rc genhtml_legend=1 00:22:07.832 --rc geninfo_all_blocks=1 00:22:07.832 --rc geninfo_unexecuted_blocks=1 00:22:07.832 00:22:07.832 ' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:07.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.832 --rc genhtml_branch_coverage=1 00:22:07.832 --rc genhtml_function_coverage=1 00:22:07.832 --rc genhtml_legend=1 00:22:07.832 --rc geninfo_all_blocks=1 00:22:07.832 --rc geninfo_unexecuted_blocks=1 00:22:07.832 00:22:07.832 ' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:07.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.832 --rc genhtml_branch_coverage=1 00:22:07.832 --rc genhtml_function_coverage=1 00:22:07.832 --rc genhtml_legend=1 00:22:07.832 --rc geninfo_all_blocks=1 00:22:07.832 --rc geninfo_unexecuted_blocks=1 00:22:07.832 00:22:07.832 ' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:07.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.832 --rc genhtml_branch_coverage=1 00:22:07.832 --rc genhtml_function_coverage=1 00:22:07.832 --rc genhtml_legend=1 00:22:07.832 --rc geninfo_all_blocks=1 00:22:07.832 --rc geninfo_unexecuted_blocks=1 00:22:07.832 00:22:07.832 ' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
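Aside: the "[: : integer expression expected" message emitted while sourcing nvmf/common.sh (line 31, seen above immediately after '[' '' -eq 1 ']' and repeated below) is the classic bash pitfall of using an empty string in a numeric test; it is harmless here because the failed test simply falls through. A minimal reproduction and one conventional guard:

    x=""
    [ "$x" -eq 1 ]        # -> [: : integer expression expected (exit status 2)
    [ "${x:-0}" -eq 1 ]   # defaulting the empty value to 0 avoids the error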
00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:07.832 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@280 -- # nvmf_veth_init 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@223 -- # create_target_ns 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:07.832 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # create_main_bridge 00:22:07.833 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@105 -- # delete_main_bridge 00:22:07.833 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:07.833 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:22:07.833 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:22:08.091 
11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator0 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:22:08.091 11:05:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target0 00:22:08.091 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0 up 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # set_up target0_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns target0 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/initiator0/ifalias' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:08.092 10.0.0.1 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:08.092 10.0.0.2 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator0 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local 
dev=initiator0_br bridge=nvmf_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target0_br 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:22:08.092 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 
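Two annotations at this point in the trace. First, the "[: : integer expression expected" diagnostic recorded above comes from nvmf/common.sh line 31 feeding an empty string to an arithmetic test ('[' '' -eq 1 ']'); bash prints the complaint, the test simply evaluates false, and the run continues (a defensive form would default the operand, e.g. [[ ${flag:-0} -eq 1 ]], where "flag" stands for whichever variable the script tests there). Second, everything from create_target_ns down to the first iptables rule is the per-pair veth plumbing for initiator0/target0; stripped of the eval/nameref indirection it reduces to the commands below, all taken from the trace (the val_to_ip body is an assumed self-contained equivalent; the trace hands printf precomputed octets):

    # integer IP pool to dotted quad: 167772161 == 0x0A000001 == 10.0.0.1
    val_to_ip() { printf '%u.%u.%u.%u\n' $(($1 >> 24)) $((($1 >> 16) & 255)) $((($1 >> 8) & 255)) $(($1 & 255)); }

    ip netns add nvmf_ns_spdk                            # target side lives in its own namespace
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                      # one bridge shared by every pair
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk

    ip addr add 10.0.0.1/24 dev initiator0               # $(val_to_ip 167772161)
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias     # ifalias doubles as the IP registry
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias

    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # NVMF_PORT

The same sequence repeats below for initiator1/target1 with 10.0.0.3/10.0.0.4 drawn from the next two pool values.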
00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator1 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target1 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1 up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # set_up target1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns target1 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set 
target1 netns nvmf_ns_spdk 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772163 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:22:08.352 10.0.0.3 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772164 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:22:08.352 10.0.0.4 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator1 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:22:08.352 11:05:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target1_br 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:22:08.352 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # 
dev_map["target$id"]=target1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 2 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:08.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:08.353 00:22:08.353 --- 10.0.0.1 ping statistics --- 00:22:08.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.353 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:08.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:08.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:22:08.353 00:22:08.353 --- 10.0.0.2 ping statistics --- 00:22:08.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.353 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:22:08.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:08.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:22:08.353 00:22:08.353 --- 10.0.0.3 ping statistics --- 00:22:08.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.353 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:22:08.353 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:08.353 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:22:08.353 00:22:08.353 --- 10.0.0.4 ping statistics --- 00:22:08.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.353 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:08.353 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # return 0 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:08.354 11:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.354 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:08.354 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:08.613 11:05:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:22:08.613 ' 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:08.613 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:08.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
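The address harvesting above (get_initiator_ip_address / get_target_ip_address) resolves every value by reading back the ifalias file written during setup rather than parsing "ip addr" output. Stripped of the nameref indirection, the lookup reduces to the sketch below (passing a namespace name instead of the trace's command-array nameref is a simplification):

    get_ip_address() {
        local dev=$1 netns=${2-} ip
        if [[ -n $netns ]]; then
            ip=$(ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias")
        else
            ip=$(cat "/sys/class/net/$dev/ifalias")
        fi
        [[ -n $ip ]] && echo "$ip"
    }

    NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)             # 10.0.0.1
    NVMF_SECOND_INITIATOR_IP=$(get_ip_address initiator1)            # 10.0.0.3
    NVMF_FIRST_TARGET_IP=$(get_ip_address target0 nvmf_ns_spdk)      # 10.0.0.2
    NVMF_SECOND_TARGET_IP=$(get_ip_address target1 nvmf_ns_spdk)     # 10.0.0.4

These are the legacy variables the abort test consumes; with a tcp transport the options resolve to NVMF_TRANSPORT_OPTS='-t tcp -o' and nvme-tcp is modprobed before the target is started.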
00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=62226 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 62226 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62226 ']' 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.614 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:08.614 [2024-12-05 11:05:33.148573] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:22:08.614 [2024-12-05 11:05:33.148694] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.872 [2024-12-05 11:05:33.309978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:08.872 [2024-12-05 11:05:33.375452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.872 [2024-12-05 11:05:33.375710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.872 [2024-12-05 11:05:33.375864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.872 [2024-12-05 11:05:33.375941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.872 [2024-12-05 11:05:33.375984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
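nvmfappstart boils down to launching nvmf_tgt inside the namespace and polling the RPC socket until it answers; the core mask 0xE matches the three reactors reported next on cores 1-3. A minimal sketch of the start-and-wait loop (the 0.5 s cadence and the use of rpc_get_methods as the liveness probe are assumptions; the retry cap of 100 and all command-line flags are from the trace):

    spdk=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    for ((i = 0; i < 100; i++)); do                  # max_retries=100
        kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done

Once the socket answers, the rpc_cmd calls that follow stand up the abort target: a TCP transport (nvmf_create_transport -t tcp -o -u 8192 -a 256), a 64 MB Malloc0 bdev with 4096-byte blocks wrapped in a Delay0 delay bdev, and subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420.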
00:22:08.872 [2024-12-05 11:05:33.377148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.872 [2024-12-05 11:05:33.377251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.872 [2024-12-05 11:05:33.377254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.872 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.872 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:22:08.872 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:08.872 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.872 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 [2024-12-05 11:05:33.551723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 Malloc0 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 Delay0 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 [2024-12-05 11:05:33.627957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.130 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:22:09.399 [2024-12-05 11:05:33.828013] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:11.307 Initializing NVMe Controllers 00:22:11.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:22:11.307 controller IO queue size 128 less than required 00:22:11.307 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:22:11.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:22:11.307 Initialization complete. Launching workers. 
00:22:11.307 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30088 00:22:11.307 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30149, failed to submit 62 00:22:11.307 success 30092, unsuccessful 57, failed 0 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:11.307 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:11.307 rmmod nvme_tcp 00:22:11.307 rmmod nvme_fabrics 00:22:11.565 rmmod nvme_keyring 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 62226 ']' 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 62226 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62226 ']' 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62226 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.565 11:05:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62226 00:22:11.565 killing process with pid 62226 00:22:11.565 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:11.565 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:11.565 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62226' 00:22:11.565 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62226 00:22:11.565 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62226 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@342 -- # nvmf_fini 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- 
# for dev in "${dev_map[@]}" 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:22:11.824 00:22:11.824 real 0m4.190s 00:22:11.824 user 0m10.515s 00:22:11.824 sys 0m1.337s 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:22:11.824 ************************************ 00:22:11.824 END TEST nvmf_abort 00:22:11.824 ************************************ 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:11.824 ************************************ 00:22:11.824 START TEST nvmf_ns_hotplug_stress 00:22:11.824 ************************************ 00:22:11.824 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:22:12.084 * Looking for test storage... 
00:22:12.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:12.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.084 --rc genhtml_branch_coverage=1 00:22:12.084 --rc genhtml_function_coverage=1 00:22:12.084 --rc genhtml_legend=1 00:22:12.084 --rc geninfo_all_blocks=1 00:22:12.084 --rc geninfo_unexecuted_blocks=1 00:22:12.084 00:22:12.084 ' 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:12.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.084 --rc genhtml_branch_coverage=1 00:22:12.084 --rc genhtml_function_coverage=1 00:22:12.084 --rc genhtml_legend=1 00:22:12.084 --rc geninfo_all_blocks=1 00:22:12.084 --rc geninfo_unexecuted_blocks=1 00:22:12.084 00:22:12.084 ' 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:12.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.084 --rc genhtml_branch_coverage=1 00:22:12.084 --rc genhtml_function_coverage=1 00:22:12.084 --rc genhtml_legend=1 00:22:12.084 --rc geninfo_all_blocks=1 00:22:12.084 --rc geninfo_unexecuted_blocks=1 00:22:12.084 00:22:12.084 ' 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:12.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.084 --rc genhtml_branch_coverage=1 00:22:12.084 --rc genhtml_function_coverage=1 00:22:12.084 --rc genhtml_legend=1 00:22:12.084 --rc geninfo_all_blocks=1 00:22:12.084 --rc geninfo_unexecuted_blocks=1 00:22:12.084 00:22:12.084 ' 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.084 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:12.085 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
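The "[: : integer expression expected" line above is a recorded shell diagnostic, not a test failure: common.sh line 31 compares a flag variable with -eq, and when that variable is unset its expansion is the empty string, which test(1) cannot parse as an integer. A minimal reproduction, using an illustrative variable name rather than the one common.sh actually reads:

flag=""                   # stands in for an unset SPDK_TEST_* style flag
[ "$flag" -eq 1 ]         # prints "[: : integer expression expected", exits 2
[ "${flag:-0}" -eq 1 ]    # defaulting the expansion keeps the test well-formed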
00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@280 -- # nvmf_veth_init 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@223 -- # create_target_ns 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.085 11:05:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # create_main_bridge 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@105 -- # delete_main_bridge 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:12.085 11:05:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator0 00:22:12.085 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target0 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0 up 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target0_br 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.086 
11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target0 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:22:12.086 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:22:12.346 10.0.0.1 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:22:12.346 10.0.0.2 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator0 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up target0_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator1 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:22:12.346 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1 up 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target1_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 
-- # val_to_ip 167772163 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772163 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:22:12.347 10.0.0.3 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772164 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:22:12.347 10.0.0.4 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:12.347 11:05:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up target1_br 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 2 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:12.347 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:12.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:12.347 00:22:12.347 --- 10.0.0.1 ping statistics --- 00:22:12.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.347 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:12.348 11:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:12.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
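
The ipts wrapper traced above (nvmf/common.sh@547) tags every iptables rule it installs with an 'SPDK_NVMF:' comment naming the exact rule spec, so teardown can find and drop precisely the rules the test added. A minimal sketch of that pattern, assuming root privileges; the restore-based cleanup one-liner is illustrative, not necessarily the exact common.sh teardown:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # open the NVMe/TCP listener port on the initiator-facing veth
    ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
    # teardown: replay the ruleset with every SPDK_NVMF-tagged rule filtered out
    iptables-save | grep -v 'SPDK_NVMF:' | iptables-restore
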
00:22:12.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:22:12.607 00:22:12.607 --- 10.0.0.2 ping statistics --- 00:22:12.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.607 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:12.607 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:22:12.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
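
Throughout this trace, get_ip_address resolves a device's test address by reading /sys/class/net/<dev>/ifalias (setup.sh records the assigned IP in the interface alias) rather than parsing `ip addr`, and ping_ip runs its probe inside the target's network namespace whenever one is supplied. A condensed sketch of the two helpers; the real ones take a nameref to a command array (NVMF_TARGET_NS_CMD), simplified here to a plain namespace name:

    get_ip_address() {    # dev [netns] -> IP recorded in the interface alias
        local dev=$1 ns=${2:-}
        ${ns:+ip netns exec "$ns"} cat "/sys/class/net/$dev/ifalias"
    }
    ping_ip() {           # ip [netns] -> single-packet reachability check
        local ip=$1 ns=${2:-}
        ${ns:+ip netns exec "$ns"} ping -c 1 "$ip"
    }
    ping_ip "$(get_ip_address initiator1)" nvmf_ns_spdk    # 10.0.0.3 in the trace above
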
00:22:12.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:22:12.608 00:22:12.608 --- 10.0.0.3 ping statistics --- 00:22:12.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.608 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:22:12.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
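
These probes are driven by ping_ips 2, one iteration per veth pair. Because the setup loop advances _dev and ip_pool together ((( _dev++, ip_pool += 2 )) above), pair N owns two consecutive addresses (initiator0/target0 get 10.0.0.1/.2, initiator1/target1 get 10.0.0.3/.4), and each pair is checked in both directions: the initiator address from inside the target netns, the target address from the host side. A sketch of that loop, built on the helpers sketched above; packaging is illustrative:

    ping_ips() {
        local pairs=$1 pair ns=nvmf_ns_spdk
        for (( pair = 0; pair < pairs; pair++ )); do
            # initiator address, probed from inside the target's netns
            ping_ip "$(get_ip_address "initiator$pair")" "$ns"
            # target address, probed from the default namespace
            ping_ip "$(get_ip_address "target$pair" "$ns")"
        done
    }
    ping_ips 2
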
00:22:12.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:22:12.608 00:22:12.608 --- 10.0.0.4 ping statistics --- 00:22:12.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.608 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # return 0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:12.608 
11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:22:12.608 ' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:12.608 11:05:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=62505 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 62505 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62505 ']' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.608 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:12.608 [2024-12-05 11:05:37.216921] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:22:12.608 [2024-12-05 11:05:37.217031] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.868 [2024-12-05 11:05:37.371959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:12.868 [2024-12-05 11:05:37.426473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.868 [2024-12-05 11:05:37.426675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.868 [2024-12-05 11:05:37.426729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.868 [2024-12-05 11:05:37.426791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.868 [2024-12-05 11:05:37.426862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
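
nvmfappstart amounts to launching nvmf_tgt inside the test namespace (nvmf/common.sh@327 above), recording its pid, and blocking in waitforlisten until the JSON-RPC socket answers. A hedged sketch of that sequence; the polling loop is illustrative rather than the exact waitforlisten body, and rpc_get_methods is used only as a cheap liveness RPC:

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the target answers, bailing out if it died early
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited prematurely'; exit 1; }
        sleep 0.5
    done
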
00:22:12.868 [2024-12-05 11:05:37.427922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.868 [2024-12-05 11:05:37.428008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.868 [2024-12-05 11:05:37.428008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:22:13.126 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:13.384 [2024-12-05 11:05:37.868860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.384 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:13.643 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.902 [2024-12-05 11:05:38.346384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.902 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:14.160 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:22:14.419 Malloc0 00:22:14.419 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:14.677 Delay0 00:22:14.677 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:14.935 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:22:14.935 NULL1 00:22:14.935 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:22:15.194 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62626 00:22:15.194 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:22:15.194 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:15.194 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:15.452 Read completed with error (sct=0, sc=11) 00:22:15.452 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:15.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:15.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:15.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:15.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:15.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:15.710 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:22:15.710 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:22:15.969 true 00:22:15.969 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:15.969 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:16.905 11:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:16.905 11:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:22:16.905 11:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:22:17.164 true 00:22:17.164 11:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:17.164 11:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:17.440 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:17.698 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:22:17.698 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:22:17.955 true 00:22:17.955 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:17.955 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:18.888 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:18.888 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:22:18.888 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:22:19.146 true 00:22:19.146 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:19.146 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:19.404 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:19.663 11:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:22:19.663 11:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:22:19.922 true 00:22:19.922 11:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:19.922 11:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:20.858 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:20.858 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:22:20.858 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:22:21.116 true 00:22:21.116 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:21.116 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:21.395 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:21.665 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:22:21.665 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:22:21.924 true 00:22:21.924 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:21.924 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:22.862 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:22.862 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:22:22.862 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:22:23.121 true 00:22:23.121 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:23.121 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:23.380 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:23.639 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:22:23.639 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:22:23.898 true 00:22:23.898 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:23.898 11:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:24.836 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:25.095 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:22:25.095 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:22:25.354 true 00:22:25.354 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:25.354 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:25.354 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:25.612 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:22:25.612 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:22:25.870 true 00:22:25.870 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:25.870 11:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:26.805 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:22:26.805 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:27.063 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:22:27.063 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:22:27.321 true 00:22:27.321 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:27.321 11:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:27.579 11:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:27.837 11:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:22:27.837 11:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:22:28.094 true 00:22:28.094 11:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:28.094 11:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:28.352 11:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:28.610 11:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:22:28.610 11:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:22:28.610 true 00:22:28.610 11:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:28.610 11:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:29.985 11:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:29.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:29.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:29.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:29.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:29.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:29.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:29.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:29.985 11:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:22:29.985 11:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:22:30.243 true 00:22:30.502 11:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:30.502 11:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:31.068 11:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:31.328 11:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:22:31.328 11:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:22:31.636 true 00:22:31.636 11:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:31.636 11:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:31.901 11:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:32.169 11:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:22:32.169 11:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:22:32.437 true 00:22:32.437 11:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:32.437 11:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:32.437 11:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:32.709 11:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:22:32.709 11:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:22:32.981 true 00:22:32.981 11:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:32.981 11:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:34.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:34.403 11:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:34.403 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:22:34.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:34.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:34.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:34.403 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:34.403 11:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:22:34.403 11:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:22:34.661 true 00:22:34.661 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:34.661 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:35.597 11:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:35.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:35.597 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:22:35.597 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:22:35.856 true 00:22:35.856 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:35.856 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:36.116 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:36.375 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:22:36.375 11:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:22:36.375 true 00:22:36.633 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:36.633 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:37.568 11:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:37.827 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:22:37.827 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:22:37.827 true 00:22:37.827 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:37.827 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:38.085 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:38.344 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:22:38.344 11:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:22:38.601 true 00:22:38.859 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:38.859 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:38.859 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:39.117 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:22:39.117 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:22:39.376 true 00:22:39.376 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:39.376 11:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:40.312 11:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:40.888 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:22:40.888 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:22:40.888 true 00:22:40.888 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:40.888 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:41.146 11:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:41.404 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:22:41.404 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:22:41.662 true 00:22:41.662 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:41.662 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:41.921 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:42.179 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:22:42.179 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:22:42.438 true 00:22:42.438 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:42.438 11:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:43.375 11:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:43.635 11:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:22:43.635 11:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:22:43.893 true 00:22:43.893 11:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:43.893 11:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:44.153 11:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:44.411 11:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:22:44.411 11:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:22:44.670 true 00:22:44.670 11:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:44.670 11:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:44.930 11:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:45.188 11:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:22:45.189 11:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:22:45.189 true 00:22:45.447 11:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:45.447 11:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:46.383 Initializing NVMe 
Controllers
00:22:46.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:46.383 Controller IO queue size 128, less than required.
00:22:46.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.383 Controller IO queue size 128, less than required.
00:22:46.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:46.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:46.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:46.383 Initialization complete. Launching workers.
00:22:46.383 ========================================================
00:22:46.383                                                                                      Latency(us)
00:22:46.383 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:46.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     740.83       0.36   88700.13    2500.12 1023433.22
00:22:46.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11842.20       5.78   10808.58    2831.26  486754.27
00:22:46.383 ========================================================
00:22:46.383 Total                                                                    :   12583.03       6.14   15394.49    2500.12 1023433.22
00:22:46.383
00:22:46.383 11:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:46.642 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:22:46.642 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:22:46.900 true 00:22:46.900 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62626 00:22:46.900 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62626) - No such process 00:22:46.900 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62626 00:22:46.900 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:47.160 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:47.418 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:22:47.418 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:22:47.418 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:22:47.418 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:47.418 11:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:22:47.418 null0 00:22:47.418 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:47.418 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
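
Two things explain the spread in the perf summary above: NSID 1 sits behind Delay0, the deliberately slow delay bdev layered on Malloc0, while NSID 2 is the NULL1 null bdev, and both namespaces were hot-removed and re-added throughout the 30-second run. The Total row is simply the request-weighted aggregate of the two devices; checking the average-latency column with the table's own figures (microseconds):

    awk 'BEGIN { printf "%.2f us\n",
        (740.83 * 88700.13 + 11842.20 * 10808.58) / 12583.03 }'
    # prints 15394.47 us, matching the reported 15394.49 once the 2-decimal
    # rounding already baked into the per-device rows is accounted for
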
00:22:47.418 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:22:47.675 null1 00:22:47.675 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:47.675 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:47.675 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:22:47.933 null2 00:22:47.933 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:47.933 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:47.933 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:22:48.190 null3 00:22:48.190 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:48.190 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:48.190 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:22:48.447 null4 00:22:48.447 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:48.447 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:48.447 11:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:22:48.705 null5 00:22:48.705 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:48.705 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:48.705 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:22:48.705 null6 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:22:48.964 null7 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63710 63711 63714 63715 63716 63718 63720 63724 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:48.964 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:49.221 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:49.221 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:49.477 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:49.477 11:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:49.477 11:06:13 
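By this point the setup loop has created eight null bdevs (null0 through null7, each 100 MB with a 4096-byte block size) and forked eight add_remove workers, whose pids (63710 63711 63714 63715 63716 63718 63720 63724) the script waits on while their xtrace output interleaves below. A bash sketch consistent with script lines 14-18 and 58-66; the local variable names and the exact fork expression are assumptions:

add_remove() {                                        # ns_hotplug_stress.sh@14-18
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do                  # ns_hotplug_stress.sh@59-64
    "$rpc" bdev_null_create "null$i" 100 4096         # name, size in MB, block size
    add_remove $((i + 1)) "null$i" &                  # namespace ids 1..8, one per worker
    pids+=($!)
done
wait "${pids[@]}"                                     # ns_hotplug_stress.sh@66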
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:49.477 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:49.477 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:49.477 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.734 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:49.993 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.250 11:06:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:50.250 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.251 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.251 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.511 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:50.511 11:06:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:50.511 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:50.774 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:50.774 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:50.774 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:50.775 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.032 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.033 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:51.033 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.289 11:06:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.289 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:51.547 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.547 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:51.547 11:06:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:51.805 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:51.806 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.806 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.806 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:51.806 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:51.806 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:51.806 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:52.064 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.321 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:52.322 
11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.322 11:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.580 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.838 11:06:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:52.838 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:53.096 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.355 11:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:53.613 
11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:53.613 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:53.871 11:06:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:53.871 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:54.165 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:54.466 11:06:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.466 11:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:54.466 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.466 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.466 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.466 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.466 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:54.466 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.723 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:54.980 rmmod nvme_tcp 00:22:54.980 rmmod nvme_fabrics 00:22:54.980 rmmod nvme_keyring 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 62505 ']' 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 62505 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62505 ']' 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62505 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62505 00:22:54.980 killing process with pid 62505 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.980 
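The churn traced above is the whole of the hot-plug stress body (ns_hotplug_stress.sh lines @16-@18): a bounded counter drives nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns RPCs against the live subsystem, so namespaces appear and vanish while I/O is in flight. A minimal sketch of that shape; the nsid-to-null-bdev pairing and the interleaving here are illustrative, not the script's exact schedule:

    # Minimal sketch of the add/remove churn above; pairing and
    # interleaving are illustrative, not the script's exact schedule.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do
        n=$((RANDOM % 8 + 1))
        # attach null bdev $((n - 1)) as namespace $n of cnode1
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
        # and rip out a (possibly different) namespace in the same pass
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true
    done

The `|| true` matters: with namespaces racing in and out, an individual RPC failing (nsid already present, already removed) is expected noise, not a test failure.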
11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62505' 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62505 00:22:54.980 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62505 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:22:55.238 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator1 
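What brackets this point is the standard two-stage shutdown: killprocess probes pid 62505 with kill -0 and a ps ownership check before killing and reaping it, and nvmf_fini then unwinds the virtual topology, as the remaining delete_dev and iptr lines just below finish doing. A condensed sketch of both stages, with the uname and reactor-vs-sudo special-casing from autotest_common.sh elided, and with the body of _remove_target_ns assumed to be a plain netns delete:

    # Condensed sketch of the shutdown traced here; the uname and
    # reactor-vs-sudo ownership checks are elided.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0          # nothing left to kill
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"          # pid is a child of this shell
    }
    killprocess 62505

    ip netns delete nvmf_ns_spdk            # assumed body of _remove_target_ns
    ip link delete nvmf_br                  # main bridge
    ip link delete initiator0               # veth ends left in the root ns
    ip link delete initiator1
    # restore the firewall minus the rules tagged with SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore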
00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:22:55.496 ************************************ 00:22:55.496 END TEST nvmf_ns_hotplug_stress 00:22:55.496 ************************************ 00:22:55.496 00:22:55.496 real 0m43.521s 00:22:55.496 user 3m25.624s 00:22:55.496 sys 0m17.245s 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.496 11:06:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:22:55.496 ************************************ 00:22:55.496 START TEST nvmf_delete_subsystem 00:22:55.496 ************************************ 00:22:55.496 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:22:55.496 * Looking for test storage... 
00:22:55.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:55.496 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:55.496 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:22:55.496 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:55.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.756 --rc genhtml_branch_coverage=1 00:22:55.756 --rc genhtml_function_coverage=1 00:22:55.756 --rc genhtml_legend=1 00:22:55.756 --rc geninfo_all_blocks=1 00:22:55.756 --rc geninfo_unexecuted_blocks=1 00:22:55.756 00:22:55.756 ' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:55.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.756 --rc genhtml_branch_coverage=1 00:22:55.756 --rc genhtml_function_coverage=1 00:22:55.756 --rc genhtml_legend=1 00:22:55.756 --rc geninfo_all_blocks=1 00:22:55.756 --rc geninfo_unexecuted_blocks=1 00:22:55.756 00:22:55.756 ' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:55.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.756 --rc genhtml_branch_coverage=1 00:22:55.756 --rc genhtml_function_coverage=1 00:22:55.756 --rc genhtml_legend=1 00:22:55.756 --rc geninfo_all_blocks=1 00:22:55.756 --rc geninfo_unexecuted_blocks=1 00:22:55.756 00:22:55.756 ' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:55.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.756 --rc genhtml_branch_coverage=1 00:22:55.756 --rc genhtml_function_coverage=1 00:22:55.756 --rc genhtml_legend=1 00:22:55.756 --rc geninfo_all_blocks=1 00:22:55.756 --rc geninfo_unexecuted_blocks=1 00:22:55.756 00:22:55.756 ' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.756 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:55.757 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:55.757 
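The one error in this stretch, common.sh line 31 printing '[: : integer expression expected', is benign: build_nvmf_app_args expands an unset variable, the test becomes '[' '' -eq 1 ']', and test(1) refuses to compare an empty string with -eq. The failure mode reproduces in two lines; VAR here is a stand-in for whichever unset knob common.sh is probing:

    # VAR stands in for the unset variable common.sh tests at line 31
    VAR=
    [ "$VAR" -eq 1 ]        # [: : integer expression expected (status 2)
    [ "${VAR:-0}" -eq 1 ]   # defaulting the expansion avoids the noise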
11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@280 -- # nvmf_veth_init 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@223 -- # create_target_ns 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link 
set lo up' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # create_main_bridge 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@105 -- # delete_main_bridge 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 
-- # [[ veth == phy ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator0 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:55.757 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target0 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0 up 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target0_br 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target0 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:22:55.758 10.0.0.1 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:22:55.758 10.0.0.2 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator0 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:55.758 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target0_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:56.018 11:06:20 
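All of the addressing above comes from one integer pool: setup_interfaces starts at 167772161 (0x0a000001, i.e. 10.0.0.1) and hands out two consecutive values per pair, and val_to_ip turns each value into the dotted quad passed to ip addr add. The trace only shows the final printf with the bytes already split, so the shift-and-mask step below is an inference about the helper's internals:

    # Assumed internals of val_to_ip; only the final printf is visible
    # in the trace, so the byte split is an inference.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $((val >> 24)) $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) $((val & 0xff))
    }
    val_to_ip 167772161   # 10.0.0.1 (initiator0)
    val_to_ip 167772162   # 10.0.0.2 (target0)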
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator1 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@152 -- # set_up initiator1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target1 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1 up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target1_br 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target1 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772163 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # 
ip=10.0.0.3 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:22:56.018 10.0.0.3 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772164 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:22:56.018 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:22:56.019 10.0.0.4 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.019 11:06:20 
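Each set_ip above does double duty: it configures the address with ip addr add and mirrors the same string into the device's ifalias attribute, which is why 10.0.0.3 and 10.0.0.4 are echoed back into the log. Later helpers (the get_ip_address calls further down) recover the address with a plain cat instead of parsing ip output. The round-trip, reduced to its three traced commands, using the pair-1 values from above:

    # set_ip / get_ip_address round-trip, reduced from the trace;
    # device and address are the pair-1 values used above
    dev=initiator1
    addr=10.0.0.3
    ip addr add "$addr/24" dev "$dev"
    echo "$addr" | tee "/sys/class/net/$dev/ifalias"
    cat "/sys/class/net/$dev/ifalias"   # -> 10.0.0.3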
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target1_br 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:56.019 11:06:20 
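With pair 1 registered in dev_map the virtual fabric is complete, and the shape is easier to see replayed in one place: each pair is two veth links whose _br peers are enslaved to nvmf_br in the root namespace, with the target end pushed into nvmf_ns_spdk and TCP port 4420 opened on the initiator side; the ping sweep that follows exercises exactly this path. A condensed replay of pair 0 (pair 1 repeats it with initiator1/target1 and 10.0.0.3/10.0.0.4; ordering is tidied and the dev_map bookkeeping is dropped):

    # One interface pair, condensed and reordered from the trace
    ip netns add nvmf_ns_spdk                  # done once, before pair 0
    ip link add nvmf_br type bridge
    ip link set nvmf_br up

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk     # target side lives in the ns
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up

    # the *_br peers stitch both namespaces onto the same bridge
    ip link set initiator0_br master nvmf_br
    ip link set initiator0_br up
    ip link set target0_br master nvmf_br
    ip link set target0_br up

    # admit NVMe/TCP to the default port; the real call also tags the
    # rule with an SPDK_NVMF comment so iptr can strip it on teardown
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT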
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 2 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:56.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:22:56.019 00:22:56.019 --- 10.0.0.1 ping statistics --- 00:22:56.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.019 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:56.019 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:56.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:56.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:22:56.019 00:22:56.019 --- 10.0.0.2 ping statistics --- 00:22:56.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.020 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:22:56.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:56.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:56.020 00:22:56.020 --- 10.0.0.3 ping statistics --- 00:22:56.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.020 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.020 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:22:56.279 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:56.279 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:22:56.279 00:22:56.279 --- 10.0.0.4 ping statistics --- 00:22:56.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.279 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:56.279 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # return 0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:56.280 11:06:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:56.280 11:06:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:22:56.280 ' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # 
nvmfappstart -m 0x3 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.280 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=65141 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 65141 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 65141 ']' 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.281 11:06:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.281 [2024-12-05 11:06:20.842575] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:22:56.281 [2024-12-05 11:06:20.842658] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.538 [2024-12-05 11:06:20.997128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:56.538 [2024-12-05 11:06:21.065236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.538 [2024-12-05 11:06:21.065319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.539 [2024-12-05 11:06:21.065343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.539 [2024-12-05 11:06:21.065364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.539 [2024-12-05 11:06:21.065382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
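Up to this point the trace is pure plumbing: nvmf/setup.sh has created two initiator/target interface pairs (initiator0/initiator1 on the host, target0/target1 inside the nvmf_ns_spdk network namespace), each interface being one end of a veth whose *_br peer is enslaved to the nvmf_br bridge; assigned 10.0.0.1 through 10.0.0.4 with a /24 mask, mirroring each address into the interface's ifalias so later lookups can read it back; punched TCP/4420 through iptables with an SPDK_NVMF-tagged ACCEPT rule; and ping-verified all four addresses. common.sh then exported the legacy variables (NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4) and nvmfappstart launched nvmf_tgt inside the namespace. A minimal hand-written sketch of one pair of that topology follows; the veth creation step scrolled by before this excerpt, so this is an assumption-laden reconstruction, not the actual setup.sh helpers:

    # Minimal sketch of the topology the trace above builds, for one
    # initiator/target pair; device and namespace names are taken from the log.
    set -e
    ip netns add nvmf_ns_spdk
    ip link add nvmf_br type bridge && ip link set nvmf_br up

    # initiator0 stays on the host, target0 moves into the namespace;
    # each side keeps a *_br veth peer that is enslaved to the bridge.
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk

    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 > /sys/class/net/initiator0/ifalias    # setup.sh mirrors the IP here
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias

    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up

    # let NVMe/TCP traffic in, tagged so cleanup can strip it later
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment SPDK_NVMF

    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator
    ping -c 1 10.0.0.2                               # host -> target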
00:22:56.539 [2024-12-05 11:06:21.066635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.539 [2024-12-05 11:06:21.066637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.539 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.539 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:22:56.539 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:56.539 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:56.539 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.797 [2024-12-05 11:06:21.246408] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.797 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.798 [2024-12-05 11:06:21.268053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.798 NULL1 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.798 Delay0 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65173 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:22:56.798 11:06:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:22:57.056 [2024-12-05 11:06:21.479434] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:58.972 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.972 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.972 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:58.972 Read completed with error (sct=0, sc=8) 00:22:58.972 Read completed with error (sct=0, sc=8) 00:22:58.972 starting I/O failed: -6 00:22:58.972 Read completed with error (sct=0, sc=8) 00:22:58.972 Write completed with error (sct=0, sc=8) 00:22:58.972 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 
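The burst of "completed with error (sct=0, sc=8)" records that starts here is the behavior under test, not a harness failure: delete_subsystem.sh backed cnode1 with a null bdev wrapped in a delay bdev (roughly one second of added latency per operation, so spdk_nvme_perf always has commands in flight), then called nvmf_delete_subsystem while the run was live. sct=0/sc=8 reads as the NVMe generic status 0x08, "Command Aborted due to SQ Deletion", which is what in-flight commands should report when their queues are torn down. The configuration can be reconstructed from the rpc_cmd calls above; in this sketch plain rpc.py stands in for the suite's rpc_cmd wrapper (an assumption — the wrapper only adds socket and namespace plumbing):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # stand-in for rpc_cmd

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512               # 1000 MiB backing, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # latencies in us: ~1 s per op
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # queue depth 128, 5 s run
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # -> the sct=0/sc=8 aborts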
00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 [2024-12-05 11:06:23.513812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae97e0 is same with the state(6) to be set 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with 
error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 starting I/O failed: -6 00:22:58.973 Write completed with error (sct=0, sc=8) 00:22:58.973 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Write 
completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 starting I/O failed: -6 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Read completed with error (sct=0, sc=8) 00:22:58.974 Write completed with error (sct=0, sc=8) 00:22:58.974 [2024-12-05 11:06:23.515348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f625c00d4b0 is same with the state(6) to be set 00:22:59.910 [2024-12-05 11:06:24.492565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addaa0 is same with the state(6) to be set 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Write completed with error (sct=0, sc=8) 00:22:59.910 Write completed with error (sct=0, sc=8) 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Write completed with error (sct=0, sc=8) 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Read completed with error (sct=0, sc=8) 00:22:59.910 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 [2024-12-05 11:06:24.512768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f625c00d020 is same with the state(6) to be set 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, 
sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 [2024-12-05 11:06:24.512934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f625c00d7e0 is same with the state(6) to be set 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 [2024-12-05 11:06:24.515084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae8c30 is same with the state(6) to be set 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Write completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 Read completed with error (sct=0, sc=8) 00:22:59.911 [2024-12-05 11:06:24.516156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f625c000c40 is same with the state(6) to be set 00:22:59.911 Initializing NVMe Controllers 00:22:59.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.911 Controller IO queue size 128, less than required. 00:22:59.911 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:22:59.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:22:59.911 Initialization complete. Launching workers. 
00:22:59.911 ======================================================== 00:22:59.911 Latency(us) 00:22:59.911 Device Information : IOPS MiB/s Average min max 00:22:59.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 152.53 0.07 888050.57 335.61 1010955.07 00:22:59.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 151.53 0.07 1031660.40 3087.85 1998613.05 00:22:59.911 ======================================================== 00:22:59.911 Total : 304.06 0.15 959620.83 335.61 1998613.05 00:22:59.911 00:22:59.911 [2024-12-05 11:06:24.517269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1addaa0 (9): Bad file descriptor 00:22:59.911 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:59.911 11:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.911 11:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:22:59.911 11:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65173 00:22:59.911 11:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65173 00:23:00.478 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65173) - No such process 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65173 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 65173 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 65173 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:00.478 [2024-12-05 11:06:25.046991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65224 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:00.478 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:00.736 [2024-12-05 11:06:25.242427] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
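Round two re-creates the subsystem, listener, and Delay0 namespace, then launches a second, three-second perf run (pid 65224). The alternating "kill -0 65224" / "sleep 0.5" records that follow are a liveness poll: the script is simply waiting for perf to exit, and the eventual bash complaint "kill: (65224) - No such process" is the success path, showing that the re-created subsystem serves I/O to completion after the earlier delete. Paraphrased from the trace as a standalone loop (not the literal script text):

    perf_pid=65224           # the backgrounded spdk_nvme_perf started above
    delay=0
    while kill -0 "$perf_pid"; do       # signal 0: existence probe, sends nothing
        (( delay++ > 20 )) && exit 1    # fail the test if perf hangs past ~10 s
        sleep 0.5
    done
    # kill's "No such process" message marks the normal exit from this loop:
    # perf finished its 3 s run against the re-created subsystem.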
00:23:00.993 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:00.993 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:00.993 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:01.567 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:01.567 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:01.567 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:02.142 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:02.142 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:02.142 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:02.708 11:06:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:02.708 11:06:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:02.708 11:06:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:02.967 11:06:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:02.967 11:06:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:02.967 11:06:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:03.534 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:03.534 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:03.534 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:03.792 Initializing NVMe Controllers 00:23:03.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.792 Controller IO queue size 128, less than required. 00:23:03.792 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:03.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:23:03.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:23:03.792 Initialization complete. Launching workers. 
00:23:03.792 ======================================================== 00:23:03.792 Latency(us) 00:23:03.792 Device Information : IOPS MiB/s Average min max 00:23:03.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002872.43 1000121.81 1040731.88 00:23:03.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003851.00 1000112.75 1012411.34 00:23:03.792 ======================================================== 00:23:03.792 Total : 256.00 0.12 1003361.72 1000112.75 1040731.88 00:23:03.792 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65224 00:23:04.050 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65224) - No such process 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65224 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:04.050 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:04.050 rmmod nvme_tcp 00:23:04.050 rmmod nvme_fabrics 00:23:04.308 rmmod nvme_keyring 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 65141 ']' 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 65141 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 65141 ']' 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 65141 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65141 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.308 killing 
process with pid 65141 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65141' 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 65141 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 65141 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:04.308 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:04.566 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:04.566 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:04.566 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:23:04.566 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:23:04.566 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:04.566 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:23:04.566 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev 
initiator1 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:23:04.566 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:04.567 00:23:04.567 real 0m9.136s 00:23:04.567 user 0m26.724s 00:23:04.567 sys 0m2.610s 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:04.567 ************************************ 00:23:04.567 END TEST nvmf_delete_subsystem 00:23:04.567 ************************************ 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:04.567 ************************************ 00:23:04.567 START TEST nvmf_host_management 00:23:04.567 ************************************ 00:23:04.567 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:23:04.826 * Looking for test storage... 
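Stepping back to the delete_subsystem run that finishes above: it waits for a backgrounded process (PID 65224 in this run) with the poll loop traced at delete_subsystem.sh lines 57-60, where 'kill -0' only probes whether the process still exists and a counter bounds the wait at roughly ten seconds. A minimal sketch of that shell pattern, with a hypothetical $pid standing in for the literal 65224 (the timeout action here is illustrative, not the script's):

    delay=0
    while kill -0 "$pid" 2>/dev/null; do                    # signal 0 checks existence only, sends nothing
        (( delay++ > 20 )) && { echo "timed out waiting on $pid" >&2; break; }
        sleep 0.5                                           # same 0.5s cadence as the trace above
    done

Once 'kill -0' starts failing ("No such process", as logged above), the loop falls through and teardown proceeds.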
00:23:04.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:04.826 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:04.826 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:23:04.826 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:04.826 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.827 --rc genhtml_branch_coverage=1 00:23:04.827 --rc genhtml_function_coverage=1 00:23:04.827 --rc genhtml_legend=1 00:23:04.827 --rc geninfo_all_blocks=1 00:23:04.827 --rc geninfo_unexecuted_blocks=1 00:23:04.827 00:23:04.827 ' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.827 --rc genhtml_branch_coverage=1 00:23:04.827 --rc genhtml_function_coverage=1 00:23:04.827 --rc genhtml_legend=1 00:23:04.827 --rc geninfo_all_blocks=1 00:23:04.827 --rc geninfo_unexecuted_blocks=1 00:23:04.827 00:23:04.827 ' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.827 --rc genhtml_branch_coverage=1 00:23:04.827 --rc genhtml_function_coverage=1 00:23:04.827 --rc genhtml_legend=1 00:23:04.827 --rc geninfo_all_blocks=1 00:23:04.827 --rc geninfo_unexecuted_blocks=1 00:23:04.827 00:23:04.827 ' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:04.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.827 --rc genhtml_branch_coverage=1 00:23:04.827 --rc genhtml_function_coverage=1 00:23:04.827 --rc genhtml_legend=1 00:23:04.827 --rc geninfo_all_blocks=1 00:23:04.827 --rc geninfo_unexecuted_blocks=1 00:23:04.827 00:23:04.827 ' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
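The lcov gate traced above runs cmp_versions from scripts/common.sh ('lt 1.15 2'): each version string is split into fields on '.', '-' and ':' and the fields are compared numerically, left to right, out to the longer of the two field lists. A rough re-creation of that field-by-field idea (missing fields default to zero here; a sketch, not the repository's exact helper):

    version_lt() {                              # returns 0 (true) if $1 < $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                                # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2 decides it

In the run above the very first fields (1 vs 2) settle the comparison, so the older lcov option set is selected.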
00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:04.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:04.827 11:06:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:04.827 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@280 -- # nvmf_veth_init 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@223 -- # create_target_ns 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:04.828 11:06:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # create_main_bridge 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@105 -- # delete_main_bridge 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == 
tcp ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator0 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:04.828 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target0 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0 up 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target0_br 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:05.087 11:06:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target0 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:23:05.087 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:23:05.088 10.0.0.1 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:23:05.088 10.0.0.2 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator0 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target0_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator1 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:05.088 11:06:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target1 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1 up 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target1_br 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target1 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772163 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@197 -- # ip=10.0.0.3 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:23:05.088 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:23:05.089 10.0.0.3 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772164 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:23:05.089 10.0.0.4 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator1 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.089 11:06:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:23:05.089 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target1_br 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 2 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:05.400 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:05.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:23:05.401 00:23:05.401 --- 10.0.0.1 ping statistics --- 00:23:05.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.401 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:05.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:05.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:23:05.401 00:23:05.401 --- 10.0.0.2 ping statistics --- 00:23:05.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.401 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:23:05.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:05.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:23:05.401 00:23:05.401 --- 10.0.0.3 ping statistics --- 00:23:05.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.401 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:23:05.401 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:05.401 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.097 ms 00:23:05.401 00:23:05.401 --- 10.0.0.4 ping statistics --- 00:23:05.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.401 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # return 0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:05.401 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
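The address plumbing traced above is straightforward once unwound: each fake device stores its IP in /sys/class/net/<dev>/ifalias, and the setup helpers read it back, prefixing the read with `ip netns exec nvmf_ns_spdk` whenever the device lives in the target namespace. Connectivity is then proved in both directions for every pair: the target namespace pings each initiator address (10.0.0.1, 10.0.0.3) and the host pings each target address (10.0.0.2, 10.0.0.4). A condensed sketch of the pattern; the real helpers in nvmf/setup.sh use eval-based indirection and nameref namespaces, and the helper name get_dev_ip here is illustrative, not the harness's own:

  # Read the IP recorded for a device, optionally inside the target netns.
  get_dev_ip() {
      local dev=$1 netns=${2-}
      if [[ -n $netns ]]; then
          ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias"
      else
          cat "/sys/class/net/$dev/ifalias"
      fi
  }

  # One initiator/target pair, verified both ways as in the trace above:
  ip netns exec nvmf_ns_spdk ping -c 1 "$(get_dev_ip initiator0)"   # 10.0.0.1
  ping -c 1 "$(get_dev_ip target0 nvmf_ns_spdk)"                    # 10.0.0.2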
00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:23:05.402 ' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@69 -- # starttarget 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=65518 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 65518 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65518 ']' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.402 11:06:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:05.662 [2024-12-05 11:06:30.047903] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:23:05.662 [2024-12-05 11:06:30.048023] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.662 [2024-12-05 11:06:30.211069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.662 [2024-12-05 11:06:30.275787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.662 [2024-12-05 11:06:30.275845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.662 [2024-12-05 11:06:30.275860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.662 [2024-12-05 11:06:30.275873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.662 [2024-12-05 11:06:30.275884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
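A note on the startup sequence just traced: nvmfappstart launches nvmf_tgt inside the test namespace, records its pid, and then blocks in waitforlisten, which polls for the RPC Unix socket under the max_retries=100 budget shown above before any rpc_cmd is issued. A minimal sketch of that gate, assuming $rootdir points at the SPDK checkout (/home/vagrant/spdk_repo/spdk in this run); the real waitforlisten also confirms the pid is alive and round-trips an RPC before declaring the target ready:

  ip netns exec nvmf_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # Poll until the RPC socket appears, then hand control back to the test.
  for ((i = 100; i > 0; i--)); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.5   # interval is an assumption
  done
  ((i > 0)) || { echo "nvmf_tgt failed to start" >&2; exit 1; }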
00:23:05.662 [2024-12-05 11:06:30.276888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.662 [2024-12-05 11:06:30.276971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.662 [2024-12-05 11:06:30.277183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:05.662 [2024-12-05 11:06:30.277186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:05.920 [2024-12-05 11:06:30.455244] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:05.920 Malloc0 00:23:05.920 [2024-12-05 11:06:30.530326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:05.920 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65571 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65571 /var/tmp/bdevperf.sock 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65571 ']' 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:06.178 { 00:23:06.178 "params": { 00:23:06.178 "name": "Nvme$subsystem", 00:23:06.178 "trtype": "$TEST_TRANSPORT", 00:23:06.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.178 "adrfam": "ipv4", 00:23:06.178 "trsvcid": "$NVMF_PORT", 00:23:06.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.178 "hdgst": ${hdgst:-false}, 00:23:06.178 "ddgst": ${ddgst:-false} 00:23:06.178 }, 00:23:06.178 "method": "bdev_nvme_attach_controller" 00:23:06.178 } 00:23:06.178 EOF 00:23:06.178 )") 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:23:06.178 11:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:06.178 "params": { 00:23:06.178 "name": "Nvme0", 00:23:06.178 "trtype": "tcp", 00:23:06.178 "traddr": "10.0.0.2", 00:23:06.178 "adrfam": "ipv4", 00:23:06.178 "trsvcid": "4420", 00:23:06.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:06.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:06.178 "hdgst": false, 00:23:06.178 "ddgst": false 00:23:06.178 }, 00:23:06.178 "method": "bdev_nvme_attach_controller" 00:23:06.178 }' 00:23:06.178 [2024-12-05 11:06:30.646688] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
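The --json /dev/fd/63 argument above is bash process substitution at work: gen_nvmf_target_json builds one bdev_nvme_attach_controller fragment per requested subsystem (the heredoc template visible in the trace), joins the fragments with IFS=',', pretty-prints the assembled config through jq ., and bdevperf reads the result straight off the file descriptor with no temp file. A cut-down stand-in for the fragment builder, keeping only the fields printed above; build_attach_json is an illustrative name, and jq -n replaces the script's own heredoc:

  build_attach_json() {
      local n=$1
      jq -n --arg n "$n" --arg ip "$NVMF_FIRST_TARGET_IP" --arg port "${NVMF_PORT:-4420}" '{
          params: {
              name: ("Nvme" + $n),
              trtype: "tcp",
              traddr: $ip,
              adrfam: "ipv4",
              trsvcid: $port,
              subnqn: ("nqn.2016-06.io.spdk:cnode" + $n),
              hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
              hdgst: false,
              ddgst: false
          },
          method: "bdev_nvme_attach_controller"
      }'
  }

  # /dev/fd/63 in the trace comes from substitution of this shape (the real
  # helper wraps the fragments in a fuller config document before bdevperf sees them):
  "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
      --json <(build_attach_json 0) -q 64 -o 65536 -w verify -t 10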
00:23:06.178 [2024-12-05 11:06:30.646792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65571 ] 00:23:06.178 [2024-12-05 11:06:30.806035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.437 [2024-12-05 11:06:30.869359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.437 Running I/O for 10 seconds... 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1317 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1317 -ge 100 ']' 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
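The waitforio loop that just ran is the test's readiness gate: before injecting any fault it polls bdevperf's iostat over the RPC socket until the bdev has completed at least 100 reads (1317 on the first poll here), guaranteeing the failure lands on a live, busy connection. In outline, with the retry budget from the trace; the poll interval is an assumption, since this run broke out on the first pass:

  ret=1
  for ((i = 10; i != 0; i--)); do
      read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
          | jq -r '.bdevs[0].num_read_ops')
      if [ "$read_io_count" -ge 100 ]; then
          ret=0
          break
      fi
      sleep 0.25   # assumed; not exercised in this run
  done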
00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:07.372 [2024-12-05 11:06:31.841120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.372 [2024-12-05 11:06:31.841169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.372 [2024-12-05 11:06:31.841183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.372 [2024-12-05 11:06:31.841194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.372 [2024-12-05 11:06:31.841206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.372 [2024-12-05 11:06:31.841216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.372 [2024-12-05 11:06:31.841227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.372 [2024-12-05 11:06:31.841237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.372 [2024-12-05 11:06:31.841248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d4660 is same with the state(6) to be set 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.372 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:07.372 [2024-12-05 11:06:31.847309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.372 [2024-12-05 11:06:31.847347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.372 [2024-12-05 11:06:31.847368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.372 [2024-12-05 11:06:31.847379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.372 [2024-12-05 11:06:31.847392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.372 
[2024-12-05 11:06:31.847403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 61 near-identical abort pairs elided: nvme_io_qpair_print_command WRITE sqid:1 cid:3 through cid:63, nsid:1, lba 49536 through 57216 (len:128 each), every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:07.374 [2024-12-05 11:06:31.849978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:07.374 task offset: 49152 on job bdev=Nvme0n1 fails
00:23:07.374
00:23:07.374 Latency(us)
00:23:07.374 [2024-12-05T11:06:32.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:07.374 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:07.374 Job: Nvme0n1 ended in about 0.80 seconds with error
00:23:07.374 Verification LBA range: start 0x0 length 0x400
00:23:07.374 Nvme0n1 : 0.80 1754.87 109.68 79.77 0.00 34053.94 1950.48 38198.13
00:23:07.374 [2024-12-05T11:06:32.026Z] ===================================================================================================================
00:23:07.374 [2024-12-05T11:06:32.026Z] Total : 1754.87 109.68 79.77 0.00 34053.94 1950.48 38198.13
00:23:07.374 [2024-12-05 11:06:31.852180] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:07.374 [2024-12-05 11:06:31.852206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d4660 (9): Bad file descriptor
00:23:07.374 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.374 11:06:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-12-05 11:06:31.862795] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
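Summarizing the failure block above: nvmf_subsystem_remove_host revoked host0's access to cnode0 while bdevperf had 64 writes in flight, the target tore down the queue pair, every outstanding command completed ABORTED - SQ DELETION, the job failed after about 0.80 seconds (1754.87 IOPS up to that point), and the host-side driver disconnected and reset the controller, succeeding once the host had been re-added. The injected fault, distilled from the rpc_cmd traces at host_management.sh@84-87:

  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # connection drops, I/O aborts
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # restore access
  sleep 1                                                                                   # let the reset settle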
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65571
00:23:08.307 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65571) - No such process
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=()
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:23:08.307 {
00:23:08.307 "params": {
00:23:08.307 "name": "Nvme$subsystem",
00:23:08.307 "trtype": "$TEST_TRANSPORT",
00:23:08.307 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:08.307 "adrfam": "ipv4",
00:23:08.307 "trsvcid": "$NVMF_PORT",
00:23:08.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:08.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:08.307 "hdgst": ${hdgst:-false},
00:23:08.307 "ddgst": ${ddgst:-false}
00:23:08.307 },
00:23:08.307 "method": "bdev_nvme_attach_controller"
00:23:08.307 }
00:23:08.307 EOF
00:23:08.307 )")
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq .
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=,
00:23:08.307 11:06:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:23:08.307 "params": {
00:23:08.307 "name": "Nvme0",
00:23:08.307 "trtype": "tcp",
00:23:08.307 "traddr": "10.0.0.2",
00:23:08.307 "adrfam": "ipv4",
00:23:08.307 "trsvcid": "4420",
00:23:08.307 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:23:08.307 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:23:08.307 "hdgst": false,
00:23:08.307 "ddgst": false
00:23:08.307 },
00:23:08.307 "method": "bdev_nvme_attach_controller"
00:23:08.307 }'
00:23:08.307 [2024-12-05 11:06:32.922476] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:23:08.307 [2024-12-05 11:06:32.922585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65626 ]
00:23:08.563 [2024-12-05 11:06:33.077519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:08.563 [2024-12-05 11:06:33.129695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:08.821 Running I/O for 1 seconds...
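A note on the invocation above: /dev/fd/62 is bash process substitution, so gen_nvmf_target_json expands its heredoc with the live transport values (tcp, 10.0.0.2:4420) and hands the JSON to bdevperf as a file descriptor without ever touching disk. A minimal sketch of the equivalent standalone call, assuming nvmf/common.sh is already sourced and the target subsystem is listening:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # 64 queued I/Os of 65536 bytes each, verify workload, 1 second run
  "$bdevperf" --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1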
00:23:09.855 1920.00 IOPS, 120.00 MiB/s
00:23:09.855 Latency(us)
00:23:09.855 [2024-12-05T11:06:34.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:09.855 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:09.855 Verification LBA range: start 0x0 length 0x400
00:23:09.855 Nvme0n1 : 1.01 1958.13 122.38 0.00 0.00 32113.65 4930.80 29834.48
00:23:09.855 [2024-12-05T11:06:34.507Z] ===================================================================================================================
00:23:09.855 [2024-12-05T11:06:34.507Z] Total : 1958.13 122.38 0.00 0.00 32113.65 4930.80 29834.48
00:23:09.855 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync
00:23:10.114 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20}
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 65518 ']'
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 65518
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65518 ']'
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65518
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65518
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:23:10.115 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65518' 00:23:10.115 killing process with pid 65518 00:23:10.115 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65518 00:23:10.115 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65518 00:23:10.373 [2024-12-05 11:06:34.792710] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:23:10.373 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:23:10.374 11:06:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:10.642 00:23:10.642 real 0m5.841s 00:23:10.642 user 0m21.002s 00:23:10.642 sys 0m1.747s 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:10.642 ************************************ 00:23:10.642 END TEST nvmf_host_management 00:23:10.642 ************************************ 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:10.642 ************************************ 00:23:10.642 START TEST nvmf_lvol 00:23:10.642 ************************************ 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:10.642 * Looking for test storage... 
00:23:10.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.642 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:10.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.903 --rc genhtml_branch_coverage=1 00:23:10.903 --rc genhtml_function_coverage=1 00:23:10.903 --rc genhtml_legend=1 00:23:10.903 --rc geninfo_all_blocks=1 00:23:10.903 --rc geninfo_unexecuted_blocks=1 00:23:10.903 00:23:10.903 ' 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:10.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.903 --rc genhtml_branch_coverage=1 00:23:10.903 --rc genhtml_function_coverage=1 00:23:10.903 --rc genhtml_legend=1 00:23:10.903 --rc geninfo_all_blocks=1 00:23:10.903 --rc geninfo_unexecuted_blocks=1 00:23:10.903 00:23:10.903 ' 00:23:10.903 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:10.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.904 --rc genhtml_branch_coverage=1 00:23:10.904 --rc genhtml_function_coverage=1 00:23:10.904 --rc genhtml_legend=1 00:23:10.904 --rc geninfo_all_blocks=1 00:23:10.904 --rc geninfo_unexecuted_blocks=1 00:23:10.904 00:23:10.904 ' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:10.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.904 --rc genhtml_branch_coverage=1 00:23:10.904 --rc genhtml_function_coverage=1 00:23:10.904 --rc genhtml_legend=1 00:23:10.904 --rc geninfo_all_blocks=1 00:23:10.904 --rc geninfo_unexecuted_blocks=1 00:23:10.904 00:23:10.904 ' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.904 11:06:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:10.904 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@280 -- # nvmf_veth_init 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@223 -- # create_target_ns 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # create_main_bridge 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@105 -- # delete_main_bridge 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:23:10.904 11:06:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:23:10.904 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator0 up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0 up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:23:10.905 11:06:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:23:10.905 10.0.0.1 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:10.905 10.0.0.2 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:10.905 11:06:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target0_br 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:23:10.905 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator1 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:23:10.906 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target1 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1 up 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target1_br 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target1 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local 
dev=target1 ns=nvmf_ns_spdk 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772163 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:23:11.165 10.0.0.3 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.165 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772164 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:23:11.166 10.0.0.4 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:11.166 11:06:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target1_br 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 
00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 2 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:11.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:23:11.166 00:23:11.166 --- 10.0.0.1 ping statistics --- 00:23:11.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.166 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:11.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:11.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:23:11.166 00:23:11.166 --- 10.0.0.2 ping statistics --- 00:23:11.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.166 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:11.166 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:23:11.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:11.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:23:11.167 00:23:11.167 --- 10.0.0.3 ping statistics --- 00:23:11.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.167 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:23:11.167 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
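While the last of these pings completes below, note what ping_ips is actually checking: each address was recorded in the device's ifalias by set_ip, and connectivity is verified in both directions — from inside the target namespace toward the initiator side, and from the host toward the target side. A minimal sketch of that loop (names from the log; the real helpers add eval and namespace indirection):

    for pair in 0 1; do
        # initiator IPs are pinged from inside the target namespace
        ip netns exec nvmf_ns_spdk ping -c 1 "$(cat /sys/class/net/initiator$pair/ifalias)"
        # target IPs are pinged from the host side
        ping -c 1 "$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target$pair/ifalias)"
    done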
00:23:11.167 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.132 ms 00:23:11.167 00:23:11.167 --- 10.0.0.4 ping statistics --- 00:23:11.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.167 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # return 0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:11.167 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
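nvmf_legacy_env, traced here, re-derives the flat environment variables that older test scripts expect from the per-device ifalias records. For the addressing in this run it appears to reduce to the following (variable names from the trace; values as resolved above and just below):

    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)                        # 10.0.0.1
    NVMF_SECOND_INITIATOR_IP=$(cat /sys/class/net/initiator1/ifalias)                       # 10.0.0.3
    NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)   # 10.0.0.2
    NVMF_SECOND_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias)  # 10.0.0.4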
00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:23:11.426 ' 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=65892 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 65892 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65892 ']' 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.426 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:11.426 [2024-12-05 11:06:35.908084] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:23:11.426 [2024-12-05 11:06:35.908168] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.427 [2024-12-05 11:06:36.055089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:11.685 [2024-12-05 11:06:36.115869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.685 [2024-12-05 11:06:36.115935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.685 [2024-12-05 11:06:36.115954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.685 [2024-12-05 11:06:36.115971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.685 [2024-12-05 11:06:36.115981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.685 [2024-12-05 11:06:36.116931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.685 [2024-12-05 11:06:36.116988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.685 [2024-12-05 11:06:36.116990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.620 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.620 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:23:12.620 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:12.620 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.620 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:12.620 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.620 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:12.878 [2024-12-05 11:06:37.308183] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.878 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:13.136 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:23:13.136 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:13.394 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:23:13.394 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:23:13.652 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:23:14.218 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e28dad4d-52cf-4fd7-9902-65d7992bb0a4 00:23:14.218 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e28dad4d-52cf-4fd7-9902-65d7992bb0a4 lvol 20 00:23:14.476 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c2eb68e3-934e-48e7-b980-b0a218bc118d 00:23:14.476 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:14.734 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2eb68e3-934e-48e7-b980-b0a218bc118d 00:23:14.991 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:15.249 [2024-12-05 11:06:39.760365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.249 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:15.508 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66045 00:23:15.508 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:23:15.508 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:23:16.444 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c2eb68e3-934e-48e7-b980-b0a218bc118d MY_SNAPSHOT 00:23:17.011 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d9b2abb0-f13d-4758-a25a-ae3b366cca03 00:23:17.011 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c2eb68e3-934e-48e7-b980-b0a218bc118d 30 00:23:17.269 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d9b2abb0-f13d-4758-a25a-ae3b366cca03 MY_CLONE 00:23:17.527 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=56d39ea0-bfbb-4a8f-afa1-b8a378e3d0d1 00:23:17.527 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 56d39ea0-bfbb-4a8f-afa1-b8a378e3d0d1 00:23:18.094 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66045 00:23:26.251 Initializing NVMe Controllers 00:23:26.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:23:26.251 Controller IO queue size 128, less than required. 00:23:26.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:26.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:23:26.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:23:26.251 Initialization complete. Launching workers. 00:23:26.251 ======================================================== 00:23:26.251 Latency(us) 00:23:26.251 Device Information : IOPS MiB/s Average min max 00:23:26.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11229.20 43.86 11407.77 599.24 114525.16 00:23:26.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10741.00 41.96 11923.93 2596.47 49529.37 00:23:26.251 ======================================================== 00:23:26.251 Total : 21970.20 85.82 11660.12 599.24 114525.16 00:23:26.251 00:23:26.251 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.251 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c2eb68e3-934e-48e7-b980-b0a218bc118d 00:23:26.251 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e28dad4d-52cf-4fd7-9902-65d7992bb0a4 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:26.509 rmmod nvme_tcp 00:23:26.509 rmmod nvme_fabrics 00:23:26.509 rmmod nvme_keyring 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:23:26.509 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 65892 ']' 00:23:26.510 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 65892 00:23:26.510 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65892 ']' 00:23:26.510 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65892 00:23:26.510 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:23:26.510 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.510 
11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65892 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:26.768 killing process with pid 65892 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65892' 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65892 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65892 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:23:26.768 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@266 -- # delete_dev initiator1 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:23:27.026 00:23:27.026 real 0m16.474s 00:23:27.026 user 1m5.692s 00:23:27.026 sys 0m5.832s 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.026 ************************************ 00:23:27.026 END TEST nvmf_lvol 00:23:27.026 ************************************ 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:23:27.026 ************************************ 00:23:27.026 START TEST nvmf_lvs_grow 00:23:27.026 ************************************ 00:23:27.026 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:27.285 * Looking for test storage... 
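Before the next test's storage probe completes below, note how nvmf_fini above unwound the topology: the target namespace is removed first (taking target0/target1 with it), the bridge and the host-side initiator veths are deleted explicitly, and iptr restores the firewall by filtering out only the rules the test tagged. In plain shell that teardown is roughly (a sketch; _remove_target_ns has its trace suppressed above, so the netns deletion is assumed, not shown):

    ip netns delete nvmf_ns_spdk    # assumed equivalent of _remove_target_ns;
                                    # target0/target1 lived there and go away with it
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1
    # drop only SPDK-tagged rules; everything else survives the round-trip
    iptables-save | grep -v SPDK_NVMF | iptables-restore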
00:23:27.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.285 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.286 --rc genhtml_branch_coverage=1 00:23:27.286 --rc genhtml_function_coverage=1 00:23:27.286 --rc genhtml_legend=1 00:23:27.286 --rc geninfo_all_blocks=1 00:23:27.286 --rc geninfo_unexecuted_blocks=1 00:23:27.286 00:23:27.286 ' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.286 --rc genhtml_branch_coverage=1 00:23:27.286 --rc genhtml_function_coverage=1 00:23:27.286 --rc genhtml_legend=1 00:23:27.286 --rc geninfo_all_blocks=1 00:23:27.286 --rc geninfo_unexecuted_blocks=1 00:23:27.286 00:23:27.286 ' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.286 --rc genhtml_branch_coverage=1 00:23:27.286 --rc genhtml_function_coverage=1 00:23:27.286 --rc genhtml_legend=1 00:23:27.286 --rc geninfo_all_blocks=1 00:23:27.286 --rc geninfo_unexecuted_blocks=1 00:23:27.286 00:23:27.286 ' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.286 --rc genhtml_branch_coverage=1 00:23:27.286 --rc genhtml_function_coverage=1 00:23:27.286 --rc genhtml_legend=1 00:23:27.286 --rc geninfo_all_blocks=1 00:23:27.286 --rc geninfo_unexecuted_blocks=1 00:23:27.286 00:23:27.286 ' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:23:27.286 11:06:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:27.286 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:27.286 11:06:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@280 -- # nvmf_veth_init 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@223 -- # create_target_ns 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:27.286 11:06:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # create_main_bridge 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@105 -- # delete_main_bridge 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:23:27.286 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 
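setup_interfaces seeds the address pool at 0x0a000001 (decimal 167772161, i.e. 10.0.0.1) and advances it by two per pair, so pair 0 gets 10.0.0.1/10.0.0.2 and pair 1 gets 10.0.0.3/10.0.0.4. The val_to_ip conversion traced just below only shows the final printf with the octets already split out; the split it implies is presumably along these lines:

    val=167772161                               # 0x0a000001
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) $((  val        & 255 ))  # -> 10.0.0.1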
00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator0 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:23:27.287 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target0 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0 up 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target0_br 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target0 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator0 
167772161 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:27.547 10.0.0.1 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:27.547 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:23:27.547 10.0.0.2 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator0 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target0_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:23:27.547 11:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator1 00:23:27.547 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target1 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local 
dev=target1 in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1 up 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target1 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772163 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:23:27.548 10.0.0.3 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772164 00:23:27.548 11:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:23:27.548 10.0.0.4 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator1 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 
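Note: each set_ip step turns an integer from ip_pool into a dotted quad before calling ip addr add (167772162 == 0x0A000002 == 10.0.0.2, and so on). The trace only shows the final printf with the octets already split, so the shifting below is a plausible reconstruction of val_to_ip rather than its exact source:

    val_to_ip() {
        local val=$1
        # Emit the four octets of a 32-bit value, most significant first.
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
    }
    val_to_ip 167772164    # -> 10.0.0.4, assigned to target1 above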
00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target1_br 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:27.548 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 2 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:27.808 11:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:27.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:23:27.808 00:23:27.808 --- 10.0.0.1 ping statistics --- 00:23:27.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.808 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:27.808 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:27.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:23:27.809 00:23:27.809 --- 10.0.0.2 ping statistics --- 00:23:27.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.809 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:23:27.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:27.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:23:27.809 00:23:27.809 --- 10.0.0.3 ping statistics --- 00:23:27.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.809 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:23:27.809 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:27.809 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:23:27.809 00:23:27.809 --- 10.0.0.4 ping statistics --- 00:23:27.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.809 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # return 0 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:27.809 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:27.810 11:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:23:27.810 ' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:27.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
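Note: at this point the network fixture is complete and the legacy environment the rest of the suite consumes has been derived. A recap of the resulting plan as shell assignments, with every value as computed in the trace:

    NVMF_FIRST_INITIATOR_IP=10.0.0.1     # initiator0, host side
    NVMF_FIRST_TARGET_IP=10.0.0.2        # target0, inside netns nvmf_ns_spdk
    NVMF_SECOND_INITIATOR_IP=10.0.0.3    # initiator1, host side
    NVMF_SECOND_TARGET_IP=10.0.0.4       # target1, inside netns nvmf_ns_spdk
    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_TRANSPORT_OPTS='-t tcp -o'

All four interfaces are bridged via nvmf_br, and iptables accepts TCP port 4420 on both initiators, so the nvmfappstart call that follows can bind NVMe/TCP listeners reachable from the host.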
00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=66469 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 66469 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66469 ']' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.810 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:27.810 [2024-12-05 11:06:52.452853] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:23:27.810 [2024-12-05 11:06:52.453161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.069 [2024-12-05 11:06:52.610308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.069 [2024-12-05 11:06:52.674162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.069 [2024-12-05 11:06:52.674412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.069 [2024-12-05 11:06:52.674623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.069 [2024-12-05 11:06:52.674708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.069 [2024-12-05 11:06:52.674815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
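Note: stripped of its wrapper functions, the target startup the trace just performed is a background launch inside the namespace followed by a wait for the RPC socket. A minimal sketch, with waitforlisten approximated here by a poll loop (rpc_get_methods is simply a cheap RPC used to probe readiness; the real helper is more careful and also watches the pid):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # -m 0x1 is a core bitmap: only bit 0 is set, so the single reactor lands
    # on core 0, matching the "Reactor started on core 0" notice below.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.1
    done

The RPC channel is a Unix-domain socket (/var/tmp/spdk.sock) visible from the host filesystem, which is why the rpc.py calls that follow need no ip netns exec prefix.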
00:23:28.069 [2024-12-05 11:06:52.675193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.328 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.328 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:23:28.328 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:28.328 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.328 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:28.328 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.328 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:28.586 [2024-12-05 11:06:53.148037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:28.586 ************************************ 00:23:28.586 START TEST lvs_grow_clean 00:23:28.586 ************************************ 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:28.586 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:29.153 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:29.154 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:29.154 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1e693457-43cd-4359-9d74-64a22bb4c479 00:23:29.154 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:29.154 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:29.720 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:29.721 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:29.721 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1e693457-43cd-4359-9d74-64a22bb4c479 lvol 150 00:23:29.979 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1411b170-7735-437f-b221-5947dc0fec7e 00:23:29.979 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:29.979 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:30.237 [2024-12-05 11:06:54.707596] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:30.237 [2024-12-05 11:06:54.707683] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:30.237 true 00:23:30.237 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:30.237 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:30.495 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:30.495 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:30.752 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1411b170-7735-437f-b221-5947dc0fec7e 00:23:31.009 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:31.275 [2024-12-05 11:06:55.700102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.275 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:23:31.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66617 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66617 /var/tmp/bdevperf.sock 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66617 ']' 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.534 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:23:31.534 [2024-12-05 11:06:56.102830] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:23:31.534 [2024-12-05 11:06:56.102942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66617 ] 00:23:31.793 [2024-12-05 11:06:56.267509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.793 [2024-12-05 11:06:56.354956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.729 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.729 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:23:32.729 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:32.988 Nvme0n1 00:23:32.988 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:33.247 [ 00:23:33.247 { 00:23:33.247 "aliases": [ 00:23:33.247 "1411b170-7735-437f-b221-5947dc0fec7e" 00:23:33.247 ], 00:23:33.247 "assigned_rate_limits": { 00:23:33.247 "r_mbytes_per_sec": 0, 00:23:33.247 "rw_ios_per_sec": 0, 00:23:33.247 "rw_mbytes_per_sec": 0, 00:23:33.247 "w_mbytes_per_sec": 0 00:23:33.247 }, 00:23:33.247 "block_size": 4096, 00:23:33.248 "claimed": false, 00:23:33.248 "driver_specific": { 00:23:33.248 "mp_policy": "active_passive", 00:23:33.248 "nvme": [ 00:23:33.248 { 00:23:33.248 "ctrlr_data": { 00:23:33.248 "ana_reporting": false, 00:23:33.248 "cntlid": 1, 00:23:33.248 "firmware_revision": "25.01", 00:23:33.248 "model_number": "SPDK bdev Controller", 00:23:33.248 "multi_ctrlr": true, 00:23:33.248 "oacs": { 00:23:33.248 "firmware": 0, 00:23:33.248 "format": 0, 00:23:33.248 "ns_manage": 0, 00:23:33.248 "security": 0 00:23:33.248 }, 00:23:33.248 "serial_number": "SPDK0", 00:23:33.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.248 "vendor_id": "0x8086" 00:23:33.248 }, 00:23:33.248 "ns_data": { 00:23:33.248 "can_share": true, 00:23:33.248 "id": 1 00:23:33.248 }, 00:23:33.248 "trid": { 00:23:33.248 "adrfam": "IPv4", 00:23:33.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.248 "traddr": "10.0.0.2", 00:23:33.248 "trsvcid": "4420", 00:23:33.248 "trtype": "TCP" 00:23:33.248 }, 00:23:33.248 "vs": { 00:23:33.248 "nvme_version": "1.3" 00:23:33.248 } 00:23:33.248 } 00:23:33.248 ] 00:23:33.248 }, 00:23:33.248 "memory_domains": [ 00:23:33.248 { 00:23:33.248 "dma_device_id": "system", 00:23:33.248 "dma_device_type": 1 00:23:33.248 } 00:23:33.248 ], 00:23:33.248 "name": "Nvme0n1", 00:23:33.248 "num_blocks": 38912, 00:23:33.248 "numa_id": -1, 00:23:33.248 "product_name": "NVMe disk", 00:23:33.248 "supported_io_types": { 00:23:33.248 "abort": true, 00:23:33.248 "compare": true, 00:23:33.248 "compare_and_write": true, 00:23:33.248 "copy": true, 00:23:33.248 "flush": true, 00:23:33.248 "get_zone_info": false, 00:23:33.248 "nvme_admin": true, 00:23:33.248 "nvme_io": true, 00:23:33.248 "nvme_io_md": false, 00:23:33.248 "nvme_iov_md": false, 00:23:33.248 "read": true, 00:23:33.248 "reset": true, 00:23:33.248 "seek_data": false, 00:23:33.248 "seek_hole": false, 00:23:33.248 "unmap": true, 00:23:33.248 
"write": true, 00:23:33.248 "write_zeroes": true, 00:23:33.248 "zcopy": false, 00:23:33.248 "zone_append": false, 00:23:33.248 "zone_management": false 00:23:33.248 }, 00:23:33.248 "uuid": "1411b170-7735-437f-b221-5947dc0fec7e", 00:23:33.248 "zoned": false 00:23:33.248 } 00:23:33.248 ] 00:23:33.248 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66670 00:23:33.248 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.248 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:33.506 Running I/O for 10 seconds... 00:23:34.442 Latency(us) 00:23:34.442 [2024-12-05T11:06:59.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:34.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:34.442 Nvme0n1 : 1.00 9376.00 36.62 0.00 0.00 0.00 0.00 0.00 00:23:34.442 [2024-12-05T11:06:59.094Z] =================================================================================================================== 00:23:34.442 [2024-12-05T11:06:59.094Z] Total : 9376.00 36.62 0.00 0.00 0.00 0.00 0.00 00:23:34.442 00:23:35.378 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:35.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:35.378 Nvme0n1 : 2.00 9463.00 36.96 0.00 0.00 0.00 0.00 0.00 00:23:35.378 [2024-12-05T11:07:00.030Z] =================================================================================================================== 00:23:35.378 [2024-12-05T11:07:00.030Z] Total : 9463.00 36.96 0.00 0.00 0.00 0.00 0.00 00:23:35.378 00:23:35.636 true 00:23:35.636 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:35.636 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:35.896 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:35.896 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:35.896 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66670 00:23:36.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:36.465 Nvme0n1 : 3.00 9376.33 36.63 0.00 0.00 0.00 0.00 0.00 00:23:36.465 [2024-12-05T11:07:01.117Z] =================================================================================================================== 00:23:36.465 [2024-12-05T11:07:01.117Z] Total : 9376.33 36.63 0.00 0.00 0.00 0.00 0.00 00:23:36.465 00:23:37.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:37.415 Nvme0n1 : 4.00 9153.25 35.75 0.00 0.00 0.00 0.00 0.00 00:23:37.415 [2024-12-05T11:07:02.067Z] =================================================================================================================== 00:23:37.415 [2024-12-05T11:07:02.067Z] Total : 9153.25 35.75 0.00 0.00 0.00 
0.00 0.00 00:23:37.415 00:23:38.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:38.388 Nvme0n1 : 5.00 9055.80 35.37 0.00 0.00 0.00 0.00 0.00 00:23:38.388 [2024-12-05T11:07:03.040Z] =================================================================================================================== 00:23:38.388 [2024-12-05T11:07:03.040Z] Total : 9055.80 35.37 0.00 0.00 0.00 0.00 0.00 00:23:38.388 00:23:39.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:39.401 Nvme0n1 : 6.00 9004.67 35.17 0.00 0.00 0.00 0.00 0.00 00:23:39.401 [2024-12-05T11:07:04.053Z] =================================================================================================================== 00:23:39.401 [2024-12-05T11:07:04.053Z] Total : 9004.67 35.17 0.00 0.00 0.00 0.00 0.00 00:23:39.401 00:23:40.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:40.394 Nvme0n1 : 7.00 9004.14 35.17 0.00 0.00 0.00 0.00 0.00 00:23:40.394 [2024-12-05T11:07:05.046Z] =================================================================================================================== 00:23:40.394 [2024-12-05T11:07:05.047Z] Total : 9004.14 35.17 0.00 0.00 0.00 0.00 0.00 00:23:40.395 00:23:41.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:41.327 Nvme0n1 : 8.00 8961.00 35.00 0.00 0.00 0.00 0.00 0.00 00:23:41.327 [2024-12-05T11:07:05.979Z] =================================================================================================================== 00:23:41.327 [2024-12-05T11:07:05.979Z] Total : 8961.00 35.00 0.00 0.00 0.00 0.00 0.00 00:23:41.327 00:23:42.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:42.702 Nvme0n1 : 9.00 8949.00 34.96 0.00 0.00 0.00 0.00 0.00 00:23:42.702 [2024-12-05T11:07:07.354Z] =================================================================================================================== 00:23:42.702 [2024-12-05T11:07:07.354Z] Total : 8949.00 34.96 0.00 0.00 0.00 0.00 0.00 00:23:42.702 00:23:43.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:43.637 Nvme0n1 : 10.00 8969.00 35.04 0.00 0.00 0.00 0.00 0.00 00:23:43.637 [2024-12-05T11:07:08.289Z] =================================================================================================================== 00:23:43.637 [2024-12-05T11:07:08.289Z] Total : 8969.00 35.04 0.00 0.00 0.00 0.00 0.00 00:23:43.637 00:23:43.637 00:23:43.637 Latency(us) 00:23:43.637 [2024-12-05T11:07:08.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:43.637 Nvme0n1 : 10.00 8978.07 35.07 0.00 0.00 14252.35 5617.37 115842.68 00:23:43.637 [2024-12-05T11:07:08.289Z] =================================================================================================================== 00:23:43.637 [2024-12-05T11:07:08.289Z] Total : 8978.07 35.07 0.00 0.00 14252.35 5617.37 115842.68 00:23:43.637 { 00:23:43.637 "results": [ 00:23:43.637 { 00:23:43.637 "job": "Nvme0n1", 00:23:43.637 "core_mask": "0x2", 00:23:43.637 "workload": "randwrite", 00:23:43.637 "status": "finished", 00:23:43.637 "queue_depth": 128, 00:23:43.637 "io_size": 4096, 00:23:43.637 "runtime": 10.004152, 00:23:43.637 "iops": 8978.072304379222, 00:23:43.637 "mibps": 35.07059493898134, 00:23:43.637 "io_failed": 0, 00:23:43.637 "io_timeout": 0, 00:23:43.637 "avg_latency_us": 
14252.352458951382, 00:23:43.637 "min_latency_us": 5617.371428571429, 00:23:43.637 "max_latency_us": 115842.6819047619 00:23:43.637 } 00:23:43.637 ], 00:23:43.637 "core_count": 1 00:23:43.637 } 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66617 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66617 ']' 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66617 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66617 00:23:43.637 killing process with pid 66617 00:23:43.637 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.637 00:23:43.637 Latency(us) 00:23:43.637 [2024-12-05T11:07:08.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.637 [2024-12-05T11:07:08.289Z] =================================================================================================================== 00:23:43.637 [2024-12-05T11:07:08.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.637 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.638 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66617' 00:23:43.638 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66617 00:23:43.638 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66617 00:23:43.638 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:43.896 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:44.462 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:44.462 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:23:44.721 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:23:44.721 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:23:44.721 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:44.980 [2024-12-05 11:07:09.452201] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:44.980 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:45.238 2024/12/05 11:07:09 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:1e693457-43cd-4359-9d74-64a22bb4c479], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:23:45.238 request: 00:23:45.238 { 00:23:45.238 "method": "bdev_lvol_get_lvstores", 00:23:45.238 "params": { 00:23:45.238 "uuid": "1e693457-43cd-4359-9d74-64a22bb4c479" 00:23:45.238 } 00:23:45.238 } 00:23:45.238 Got JSON-RPC error response 00:23:45.238 GoRPCClient: error on JSON-RPC call 00:23:45.238 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:23:45.238 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.238 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.238 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.238 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:45.804 aio_bdev 00:23:45.804 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1411b170-7735-437f-b221-5947dc0fec7e 00:23:45.804 11:07:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1411b170-7735-437f-b221-5947dc0fec7e 00:23:45.804 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:45.804 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:23:45.804 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:45.804 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:45.804 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:46.062 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1411b170-7735-437f-b221-5947dc0fec7e -t 2000 00:23:46.062 [ 00:23:46.062 { 00:23:46.062 "aliases": [ 00:23:46.062 "lvs/lvol" 00:23:46.062 ], 00:23:46.062 "assigned_rate_limits": { 00:23:46.062 "r_mbytes_per_sec": 0, 00:23:46.062 "rw_ios_per_sec": 0, 00:23:46.062 "rw_mbytes_per_sec": 0, 00:23:46.062 "w_mbytes_per_sec": 0 00:23:46.062 }, 00:23:46.062 "block_size": 4096, 00:23:46.062 "claimed": false, 00:23:46.062 "driver_specific": { 00:23:46.062 "lvol": { 00:23:46.062 "base_bdev": "aio_bdev", 00:23:46.062 "clone": false, 00:23:46.062 "esnap_clone": false, 00:23:46.062 "lvol_store_uuid": "1e693457-43cd-4359-9d74-64a22bb4c479", 00:23:46.062 "num_allocated_clusters": 38, 00:23:46.062 "snapshot": false, 00:23:46.062 "thin_provision": false 00:23:46.062 } 00:23:46.062 }, 00:23:46.062 "name": "1411b170-7735-437f-b221-5947dc0fec7e", 00:23:46.062 "num_blocks": 38912, 00:23:46.062 "product_name": "Logical Volume", 00:23:46.062 "supported_io_types": { 00:23:46.062 "abort": false, 00:23:46.062 "compare": false, 00:23:46.062 "compare_and_write": false, 00:23:46.062 "copy": false, 00:23:46.062 "flush": false, 00:23:46.062 "get_zone_info": false, 00:23:46.062 "nvme_admin": false, 00:23:46.062 "nvme_io": false, 00:23:46.062 "nvme_io_md": false, 00:23:46.062 "nvme_iov_md": false, 00:23:46.062 "read": true, 00:23:46.062 "reset": true, 00:23:46.062 "seek_data": true, 00:23:46.062 "seek_hole": true, 00:23:46.062 "unmap": true, 00:23:46.062 "write": true, 00:23:46.062 "write_zeroes": true, 00:23:46.062 "zcopy": false, 00:23:46.062 "zone_append": false, 00:23:46.062 "zone_management": false 00:23:46.062 }, 00:23:46.062 "uuid": "1411b170-7735-437f-b221-5947dc0fec7e", 00:23:46.062 "zoned": false 00:23:46.062 } 00:23:46.062 ] 00:23:46.321 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:23:46.321 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:46.321 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:23:46.579 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:23:46.579 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1e693457-43cd-4359-9d74-64a22bb4c479 00:23:46.579 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:23:46.838 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:23:46.838 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1411b170-7735-437f-b221-5947dc0fec7e 00:23:47.095 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e693457-43cd-4359-9d74-64a22bb4c479 00:23:47.662 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:47.662 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:48.228 ************************************ 00:23:48.228 END TEST lvs_grow_clean 00:23:48.228 ************************************ 00:23:48.228 00:23:48.228 real 0m19.626s 00:23:48.228 user 0m18.166s 00:23:48.228 sys 0m3.189s 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:48.228 ************************************ 00:23:48.228 START TEST lvs_grow_dirty 00:23:48.228 ************************************ 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:48.228 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:48.486 
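"END TEST lvs_grow_clean" above closes the first pass: its teardown deletes the lvol, then the lvstore that held it, then the AIO bdev, in reverse creation order, before removing the backing file; the dirty variant starting next rebuilds the same stack from a fresh 200M file. A minimal sketch of that teardown RPC sequence, assuming a running SPDK target on the default socket — the UUIDs and the backing-file path are the ones from this run and will differ every run:

  # teardown sketch; values copied from this run, substitute your own
  rpc.py bdev_lvol_delete 1411b170-7735-437f-b221-5947dc0fec7e             # drop the logical volume
  rpc.py bdev_lvol_delete_lvstore -u 1e693457-43cd-4359-9d74-64a22bb4c479  # then the lvstore that held it
  rpc.py bdev_aio_delete aio_bdev                                          # detach the AIO bdev
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev             # remove the backing file
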
11:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:48.745 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:48.745 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:49.004 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2af08fc7-4959-4d45-9f18-b855103f9e13 00:23:49.004 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:23:49.004 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:49.262 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:49.262 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:49.262 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2af08fc7-4959-4d45-9f18-b855103f9e13 lvol 150 00:23:49.521 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c1523696-ff8c-4716-a739-5025a9171952 00:23:49.521 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:49.521 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:49.780 [2024-12-05 11:07:14.267341] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:49.780 [2024-12-05 11:07:14.267420] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:49.780 true 00:23:49.780 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:23:49.780 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:50.040 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:50.040 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:50.332 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1523696-ff8c-4716-a739-5025a9171952 00:23:50.897 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:50.897 [2024-12-05 11:07:15.508728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.897 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67078 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67078 /var/tmp/bdevperf.sock 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67078 ']' 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.183 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:51.183 [2024-12-05 11:07:15.823215] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
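At this point the dirty-variant lvol is exported over NVMe/TCP and bdevperf has been launched in daemon mode (-z), waiting on /var/tmp/bdevperf.sock for the test to attach a controller and start I/O by RPC. A hedged sketch of that export-and-measure sequence, with the address, NQN, UUID, and bdevperf flags copied from this run (the bdevperf binary lives under build/examples in the repo):

  # export the lvol and drive it with bdevperf (sketch; values from this run)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1523696-ff8c-4716-a739-5025a9171952
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # 4 KiB random writes, queue depth 128, 10 s run, 1 s status interval, wait for RPC start
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
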
00:23:51.183 [2024-12-05 11:07:15.823315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67078 ] 00:23:51.441 [2024-12-05 11:07:15.975543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.441 [2024-12-05 11:07:16.039950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.373 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.373 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:23:52.373 11:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:52.632 Nvme0n1 00:23:52.632 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:52.909 [ 00:23:52.909 { 00:23:52.909 "aliases": [ 00:23:52.909 "c1523696-ff8c-4716-a739-5025a9171952" 00:23:52.909 ], 00:23:52.909 "assigned_rate_limits": { 00:23:52.909 "r_mbytes_per_sec": 0, 00:23:52.909 "rw_ios_per_sec": 0, 00:23:52.909 "rw_mbytes_per_sec": 0, 00:23:52.909 "w_mbytes_per_sec": 0 00:23:52.909 }, 00:23:52.909 "block_size": 4096, 00:23:52.909 "claimed": false, 00:23:52.909 "driver_specific": { 00:23:52.909 "mp_policy": "active_passive", 00:23:52.909 "nvme": [ 00:23:52.909 { 00:23:52.909 "ctrlr_data": { 00:23:52.909 "ana_reporting": false, 00:23:52.909 "cntlid": 1, 00:23:52.909 "firmware_revision": "25.01", 00:23:52.909 "model_number": "SPDK bdev Controller", 00:23:52.910 "multi_ctrlr": true, 00:23:52.910 "oacs": { 00:23:52.910 "firmware": 0, 00:23:52.910 "format": 0, 00:23:52.910 "ns_manage": 0, 00:23:52.910 "security": 0 00:23:52.910 }, 00:23:52.910 "serial_number": "SPDK0", 00:23:52.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.910 "vendor_id": "0x8086" 00:23:52.910 }, 00:23:52.910 "ns_data": { 00:23:52.910 "can_share": true, 00:23:52.910 "id": 1 00:23:52.910 }, 00:23:52.910 "trid": { 00:23:52.910 "adrfam": "IPv4", 00:23:52.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.910 "traddr": "10.0.0.2", 00:23:52.910 "trsvcid": "4420", 00:23:52.910 "trtype": "TCP" 00:23:52.910 }, 00:23:52.910 "vs": { 00:23:52.910 "nvme_version": "1.3" 00:23:52.910 } 00:23:52.910 } 00:23:52.910 ] 00:23:52.910 }, 00:23:52.910 "memory_domains": [ 00:23:52.910 { 00:23:52.910 "dma_device_id": "system", 00:23:52.910 "dma_device_type": 1 00:23:52.910 } 00:23:52.910 ], 00:23:52.910 "name": "Nvme0n1", 00:23:52.910 "num_blocks": 38912, 00:23:52.910 "numa_id": -1, 00:23:52.910 "product_name": "NVMe disk", 00:23:52.910 "supported_io_types": { 00:23:52.910 "abort": true, 00:23:52.910 "compare": true, 00:23:52.910 "compare_and_write": true, 00:23:52.910 "copy": true, 00:23:52.910 "flush": true, 00:23:52.910 "get_zone_info": false, 00:23:52.910 "nvme_admin": true, 00:23:52.910 "nvme_io": true, 00:23:52.910 "nvme_io_md": false, 00:23:52.910 "nvme_iov_md": false, 00:23:52.910 "read": true, 00:23:52.910 "reset": true, 00:23:52.910 "seek_data": false, 00:23:52.910 "seek_hole": false, 00:23:52.910 "unmap": true, 00:23:52.910 
"write": true, 00:23:52.910 "write_zeroes": true, 00:23:52.910 "zcopy": false, 00:23:52.910 "zone_append": false, 00:23:52.910 "zone_management": false 00:23:52.910 }, 00:23:52.910 "uuid": "c1523696-ff8c-4716-a739-5025a9171952", 00:23:52.910 "zoned": false 00:23:52.910 } 00:23:52.910 ] 00:23:52.910 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.910 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67127 00:23:52.910 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:52.910 Running I/O for 10 seconds... 00:23:54.286 Latency(us) 00:23:54.286 [2024-12-05T11:07:18.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:54.286 Nvme0n1 : 1.00 9902.00 38.68 0.00 0.00 0.00 0.00 0.00 00:23:54.286 [2024-12-05T11:07:18.938Z] =================================================================================================================== 00:23:54.286 [2024-12-05T11:07:18.938Z] Total : 9902.00 38.68 0.00 0.00 0.00 0.00 0.00 00:23:54.286 00:23:54.853 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:23:55.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:55.111 Nvme0n1 : 2.00 9565.50 37.37 0.00 0.00 0.00 0.00 0.00 00:23:55.111 [2024-12-05T11:07:19.763Z] =================================================================================================================== 00:23:55.111 [2024-12-05T11:07:19.763Z] Total : 9565.50 37.37 0.00 0.00 0.00 0.00 0.00 00:23:55.111 00:23:55.111 true 00:23:55.111 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:23:55.111 11:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:55.678 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:55.678 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:55.678 11:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67127 00:23:55.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:55.937 Nvme0n1 : 3.00 9385.00 36.66 0.00 0.00 0.00 0.00 0.00 00:23:55.937 [2024-12-05T11:07:20.589Z] =================================================================================================================== 00:23:55.937 [2024-12-05T11:07:20.589Z] Total : 9385.00 36.66 0.00 0.00 0.00 0.00 0.00 00:23:55.937 00:23:56.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:56.967 Nvme0n1 : 4.00 9215.00 36.00 0.00 0.00 0.00 0.00 0.00 00:23:56.967 [2024-12-05T11:07:21.619Z] =================================================================================================================== 00:23:56.967 [2024-12-05T11:07:21.619Z] Total : 9215.00 36.00 0.00 0.00 0.00 
0.00 0.00 00:23:56.967 00:23:57.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:57.903 Nvme0n1 : 5.00 9169.00 35.82 0.00 0.00 0.00 0.00 0.00 00:23:57.903 [2024-12-05T11:07:22.555Z] =================================================================================================================== 00:23:57.903 [2024-12-05T11:07:22.555Z] Total : 9169.00 35.82 0.00 0.00 0.00 0.00 0.00 00:23:57.903 00:23:59.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:59.279 Nvme0n1 : 6.00 8900.83 34.77 0.00 0.00 0.00 0.00 0.00 00:23:59.279 [2024-12-05T11:07:23.931Z] =================================================================================================================== 00:23:59.279 [2024-12-05T11:07:23.931Z] Total : 8900.83 34.77 0.00 0.00 0.00 0.00 0.00 00:23:59.279 00:24:00.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:00.216 Nvme0n1 : 7.00 8789.00 34.33 0.00 0.00 0.00 0.00 0.00 00:24:00.216 [2024-12-05T11:07:24.868Z] =================================================================================================================== 00:24:00.216 [2024-12-05T11:07:24.868Z] Total : 8789.00 34.33 0.00 0.00 0.00 0.00 0.00 00:24:00.216 00:24:01.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:01.151 Nvme0n1 : 8.00 8608.50 33.63 0.00 0.00 0.00 0.00 0.00 00:24:01.151 [2024-12-05T11:07:25.803Z] =================================================================================================================== 00:24:01.151 [2024-12-05T11:07:25.803Z] Total : 8608.50 33.63 0.00 0.00 0.00 0.00 0.00 00:24:01.151 00:24:02.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:02.086 Nvme0n1 : 9.00 8613.89 33.65 0.00 0.00 0.00 0.00 0.00 00:24:02.086 [2024-12-05T11:07:26.738Z] =================================================================================================================== 00:24:02.086 [2024-12-05T11:07:26.738Z] Total : 8613.89 33.65 0.00 0.00 0.00 0.00 0.00 00:24:02.086 00:24:03.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:03.020 Nvme0n1 : 10.00 8603.20 33.61 0.00 0.00 0.00 0.00 0.00 00:24:03.020 [2024-12-05T11:07:27.672Z] =================================================================================================================== 00:24:03.020 [2024-12-05T11:07:27.672Z] Total : 8603.20 33.61 0.00 0.00 0.00 0.00 0.00 00:24:03.020 00:24:03.020 00:24:03.020 Latency(us) 00:24:03.020 [2024-12-05T11:07:27.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:03.020 Nvme0n1 : 10.01 8604.40 33.61 0.00 0.00 14868.49 4337.86 184749.10 00:24:03.020 [2024-12-05T11:07:27.672Z] =================================================================================================================== 00:24:03.020 [2024-12-05T11:07:27.672Z] Total : 8604.40 33.61 0.00 0.00 14868.49 4337.86 184749.10 00:24:03.020 { 00:24:03.020 "results": [ 00:24:03.020 { 00:24:03.020 "job": "Nvme0n1", 00:24:03.020 "core_mask": "0x2", 00:24:03.020 "workload": "randwrite", 00:24:03.020 "status": "finished", 00:24:03.020 "queue_depth": 128, 00:24:03.020 "io_size": 4096, 00:24:03.020 "runtime": 10.013478, 00:24:03.020 "iops": 8604.402985655934, 00:24:03.020 "mibps": 33.61094916271849, 00:24:03.020 "io_failed": 0, 00:24:03.020 "io_timeout": 0, 00:24:03.020 "avg_latency_us": 
14868.489055135517, 00:24:03.020 "min_latency_us": 4337.8590476190475, 00:24:03.020 "max_latency_us": 184749.10476190477 00:24:03.020 } 00:24:03.020 ], 00:24:03.020 "core_count": 1 00:24:03.020 } 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67078 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 67078 ']' 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 67078 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67078 00:24:03.020 killing process with pid 67078 00:24:03.020 Received shutdown signal, test time was about 10.000000 seconds 00:24:03.020 00:24:03.020 Latency(us) 00:24:03.020 [2024-12-05T11:07:27.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.020 [2024-12-05T11:07:27.672Z] =================================================================================================================== 00:24:03.020 [2024-12-05T11:07:27.672Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67078' 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 67078 00:24:03.020 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 67078 00:24:03.278 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.609 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:03.882 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:03.882 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66469 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66469 00:24:04.140 
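What makes this variant "dirty" is the shutdown: after tearing down the listener and subsystem, the test SIGKILLs the nvmf target that still owns the lvstore, so the blobstore superblock is left marked in-use. A minimal sketch, assuming $nvmfpid holds that target's pid (66469 in this run):

  # leave the lvstore dirty on purpose
  kill -9 "$nvmfpid"       # no graceful close; the blobstore stays marked in-use
  wait "$nvmfpid" || true  # reap the killed process; a non-zero status is expected

The payoff shows up below: when a fresh target re-creates the AIO bdev over the same file, the blobstore load path reports "Performing recovery on blobstore" and replays metadata instead of doing a normal clean load.
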
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66469 Killed "${NVMF_APP[@]}" "$@" 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:04.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=67295 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 67295 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67295 ']' 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.140 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:04.140 [2024-12-05 11:07:28.717552] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:04.140 [2024-12-05 11:07:28.717679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.398 [2024-12-05 11:07:28.880953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.398 [2024-12-05 11:07:28.943539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.398 [2024-12-05 11:07:28.943618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.398 [2024-12-05 11:07:28.943635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.398 [2024-12-05 11:07:28.943649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.398 [2024-12-05 11:07:28.943660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
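Because this job runs with NET_TYPE=virt, the replacement target above is launched inside the test's dedicated network namespace rather than on the host stack. A sketch of that restart, where $SPDK_BIN_DIR is a stand-in for the build/bin path used in this run:

  # restart the target inside the test netns: instance id 0, all tracepoint
  # groups enabled (-e 0xFFFF), pinned to a single core (-m 0x1)
  ip netns exec nvmf_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # the suite's waitforlisten() then polls /var/tmp/spdk.sock until the target answers RPCs
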
00:24:04.398 [2024-12-05 11:07:28.944053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.656 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.657 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:24:04.657 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:04.657 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.657 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:04.657 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.657 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:04.915 [2024-12-05 11:07:29.344831] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:04.915 [2024-12-05 11:07:29.345297] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:04.915 [2024-12-05 11:07:29.345427] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c1523696-ff8c-4716-a739-5025a9171952 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c1523696-ff8c-4716-a739-5025a9171952 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:04.915 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:05.173 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1523696-ff8c-4716-a739-5025a9171952 -t 2000 00:24:05.431 [ 00:24:05.431 { 00:24:05.431 "aliases": [ 00:24:05.431 "lvs/lvol" 00:24:05.431 ], 00:24:05.431 "assigned_rate_limits": { 00:24:05.431 "r_mbytes_per_sec": 0, 00:24:05.431 "rw_ios_per_sec": 0, 00:24:05.431 "rw_mbytes_per_sec": 0, 00:24:05.431 "w_mbytes_per_sec": 0 00:24:05.431 }, 00:24:05.431 "block_size": 4096, 00:24:05.431 "claimed": false, 00:24:05.431 "driver_specific": { 00:24:05.431 "lvol": { 00:24:05.431 "base_bdev": "aio_bdev", 00:24:05.431 "clone": false, 00:24:05.431 "esnap_clone": false, 00:24:05.431 "lvol_store_uuid": "2af08fc7-4959-4d45-9f18-b855103f9e13", 00:24:05.431 "num_allocated_clusters": 38, 00:24:05.431 "snapshot": false, 00:24:05.431 
"thin_provision": false 00:24:05.431 } 00:24:05.431 }, 00:24:05.431 "name": "c1523696-ff8c-4716-a739-5025a9171952", 00:24:05.431 "num_blocks": 38912, 00:24:05.431 "product_name": "Logical Volume", 00:24:05.431 "supported_io_types": { 00:24:05.431 "abort": false, 00:24:05.431 "compare": false, 00:24:05.431 "compare_and_write": false, 00:24:05.431 "copy": false, 00:24:05.431 "flush": false, 00:24:05.431 "get_zone_info": false, 00:24:05.431 "nvme_admin": false, 00:24:05.431 "nvme_io": false, 00:24:05.431 "nvme_io_md": false, 00:24:05.431 "nvme_iov_md": false, 00:24:05.431 "read": true, 00:24:05.431 "reset": true, 00:24:05.431 "seek_data": true, 00:24:05.431 "seek_hole": true, 00:24:05.431 "unmap": true, 00:24:05.431 "write": true, 00:24:05.431 "write_zeroes": true, 00:24:05.431 "zcopy": false, 00:24:05.431 "zone_append": false, 00:24:05.431 "zone_management": false 00:24:05.431 }, 00:24:05.431 "uuid": "c1523696-ff8c-4716-a739-5025a9171952", 00:24:05.431 "zoned": false 00:24:05.431 } 00:24:05.431 ] 00:24:05.431 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:24:05.431 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:05.431 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:24:05.689 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:24:05.689 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:24:05.689 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:05.948 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:24:05.948 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:06.207 [2024-12-05 11:07:30.806359] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:06.207 11:07:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:06.207 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:06.466 2024/12/05 11:07:31 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:2af08fc7-4959-4d45-9f18-b855103f9e13], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:24:06.466 request: 00:24:06.466 { 00:24:06.466 "method": "bdev_lvol_get_lvstores", 00:24:06.466 "params": { 00:24:06.466 "uuid": "2af08fc7-4959-4d45-9f18-b855103f9e13" 00:24:06.466 } 00:24:06.466 } 00:24:06.466 Got JSON-RPC error response 00:24:06.466 GoRPCClient: error on JSON-RPC call 00:24:06.466 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:24:06.466 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.466 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.466 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.466 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:06.724 aio_bdev 00:24:06.724 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c1523696-ff8c-4716-a739-5025a9171952 00:24:06.724 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c1523696-ff8c-4716-a739-5025a9171952 00:24:06.724 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:06.724 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:24:06.724 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:06.724 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:06.724 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:07.290 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1523696-ff8c-4716-a739-5025a9171952 -t 2000 00:24:07.547 [ 
00:24:07.547 { 00:24:07.547 "aliases": [ 00:24:07.547 "lvs/lvol" 00:24:07.547 ], 00:24:07.547 "assigned_rate_limits": { 00:24:07.547 "r_mbytes_per_sec": 0, 00:24:07.547 "rw_ios_per_sec": 0, 00:24:07.547 "rw_mbytes_per_sec": 0, 00:24:07.547 "w_mbytes_per_sec": 0 00:24:07.547 }, 00:24:07.547 "block_size": 4096, 00:24:07.547 "claimed": false, 00:24:07.547 "driver_specific": { 00:24:07.547 "lvol": { 00:24:07.547 "base_bdev": "aio_bdev", 00:24:07.547 "clone": false, 00:24:07.547 "esnap_clone": false, 00:24:07.547 "lvol_store_uuid": "2af08fc7-4959-4d45-9f18-b855103f9e13", 00:24:07.547 "num_allocated_clusters": 38, 00:24:07.547 "snapshot": false, 00:24:07.547 "thin_provision": false 00:24:07.547 } 00:24:07.547 }, 00:24:07.547 "name": "c1523696-ff8c-4716-a739-5025a9171952", 00:24:07.547 "num_blocks": 38912, 00:24:07.547 "product_name": "Logical Volume", 00:24:07.547 "supported_io_types": { 00:24:07.547 "abort": false, 00:24:07.547 "compare": false, 00:24:07.547 "compare_and_write": false, 00:24:07.547 "copy": false, 00:24:07.547 "flush": false, 00:24:07.547 "get_zone_info": false, 00:24:07.547 "nvme_admin": false, 00:24:07.547 "nvme_io": false, 00:24:07.547 "nvme_io_md": false, 00:24:07.547 "nvme_iov_md": false, 00:24:07.547 "read": true, 00:24:07.547 "reset": true, 00:24:07.547 "seek_data": true, 00:24:07.547 "seek_hole": true, 00:24:07.547 "unmap": true, 00:24:07.547 "write": true, 00:24:07.547 "write_zeroes": true, 00:24:07.547 "zcopy": false, 00:24:07.547 "zone_append": false, 00:24:07.547 "zone_management": false 00:24:07.547 }, 00:24:07.547 "uuid": "c1523696-ff8c-4716-a739-5025a9171952", 00:24:07.547 "zoned": false 00:24:07.547 } 00:24:07.547 ] 00:24:07.547 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:24:07.547 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:07.547 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:07.806 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:07.806 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:07.806 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:08.064 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:08.064 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c1523696-ff8c-4716-a739-5025a9171952 00:24:08.631 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2af08fc7-4959-4d45-9f18-b855103f9e13 00:24:08.889 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:09.148 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:09.716 ************************************ 00:24:09.716 END TEST lvs_grow_dirty 00:24:09.716 ************************************ 00:24:09.716 00:24:09.716 real 0m21.325s 00:24:09.716 user 0m43.562s 00:24:09.716 sys 0m8.070s 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:09.716 nvmf_trace.0 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:09.716 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:10.283 rmmod nvme_tcp 00:24:10.283 rmmod nvme_fabrics 00:24:10.283 rmmod nvme_keyring 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 67295 ']' 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 67295 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67295 ']' 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 67295 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:24:10.283 11:07:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67295 00:24:10.283 killing process with pid 67295 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67295' 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67295 00:24:10.283 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67295 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:10.558 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:10.559 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:10.559 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:10.559 11:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
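killprocess above only signals the daemon after probing that the PID is still alive, then waits so the exit status is reaped. Stripped of the log's uname/ps ownership checks, the core pattern looks roughly like this (the PID is the one from this run; normally it would be the "$!" of the launched app):

pid=67295
if kill -0 "$pid" 2>/dev/null; then      # signal 0: existence test, nothing is delivered
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true      # reap; tolerate the killed child's non-zero status
fi

Note that wait can only reap children of the calling shell, which is why the suite issues it from the same shell that launched the target application.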
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:10.559 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:24:10.560 00:24:10.560 real 0m43.518s 00:24:10.560 user 1m8.723s 00:24:10.560 sys 0m12.386s 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:10.560 ************************************ 00:24:10.560 END TEST nvmf_lvs_grow 00:24:10.560 ************************************ 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:24:10.560 ************************************ 00:24:10.560 START TEST nvmf_bdev_io_wait 00:24:10.560 ************************************ 00:24:10.560 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:10.834 * Looking for test storage... 
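The iptr step above tears down every firewall rule the suite installed in a single pass: dump the full ruleset, drop the lines tagged with the SPDK_NVMF comment, and restore the remainder. The idiom in isolation:

# Remove all rules carrying the SPDK_NVMF comment tag; leave everything else untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore

This only works because every rule the tests add (see the ipts calls in the setup trace below) embeds that tag via -m comment.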
00:24:10.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.834 --rc genhtml_branch_coverage=1 00:24:10.834 --rc genhtml_function_coverage=1 00:24:10.834 --rc genhtml_legend=1 00:24:10.834 --rc geninfo_all_blocks=1 00:24:10.834 --rc geninfo_unexecuted_blocks=1 00:24:10.834 00:24:10.834 ' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.834 --rc genhtml_branch_coverage=1 00:24:10.834 --rc genhtml_function_coverage=1 00:24:10.834 --rc genhtml_legend=1 00:24:10.834 --rc geninfo_all_blocks=1 00:24:10.834 --rc geninfo_unexecuted_blocks=1 00:24:10.834 00:24:10.834 ' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.834 --rc genhtml_branch_coverage=1 00:24:10.834 --rc genhtml_function_coverage=1 00:24:10.834 --rc genhtml_legend=1 00:24:10.834 --rc geninfo_all_blocks=1 00:24:10.834 --rc geninfo_unexecuted_blocks=1 00:24:10.834 00:24:10.834 ' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.834 --rc genhtml_branch_coverage=1 00:24:10.834 --rc genhtml_function_coverage=1 00:24:10.834 --rc genhtml_legend=1 00:24:10.834 --rc geninfo_all_blocks=1 00:24:10.834 --rc geninfo_unexecuted_blocks=1 00:24:10.834 00:24:10.834 ' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
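The lcov probe above funnels through a generic comparator in scripts/common.sh that splits both version strings on '.', '-' and ':' and compares fields left to right, treating missing fields as zero. A condensed standalone sketch of the same idea (the function below is my simplification for numeric fields, not the script's full cmp_versions):

# Succeed if dotted version $1 is strictly lower than $2 (e.g. "1.15" < "2").
version_lt() {
    local -a v1 v2
    local i max
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"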
-- nvmf/common.sh@7 -- # uname -s 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:10.834 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:10.834 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@223 -- # create_target_ns 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
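nvmftestinit starts by giving the target its own network namespace; every target-side command from here on runs through an 'ip netns exec' prefix kept in an array (NVMF_TARGET_NS_CMD in the trace). A minimal sketch of that arrangement, using the namespace name from this run (requires root):

NS=nvmf_ns_spdk
ip netns add "$NS"

# Reusable prefix: expand it in front of any command to run inside the namespace.
TARGET_NS_CMD=(ip netns exec "$NS")

# Loopback starts out down in a fresh namespace; raise it before anything else.
"${TARGET_NS_CMD[@]}" ip link set lo up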
-- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:10.835 11:07:35 
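create_main_bridge above makes a single Linux bridge that the host-side halves of all veth pairs will join, and immediately allows bridged traffic through the FORWARD chain. In isolation (requires root; the comment tag mirrors the one the suite uses so teardown can sweep the rule away later):

# One bridge for all host-side veth endpoints.
ip link add nvmf_br type bridge
ip link set nvmf_br up

# Permit traffic forwarded across the bridge; the comment lets the
# iptables-save | grep -v SPDK_NVMF | iptables-restore teardown find it.
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'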
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target0 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:10.835 11:07:35 
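Each interface pair above is built from veth devices: the initiator keeps its host-side end, both pairs' _br peers are enslaved to nvmf_br, and the target end is pushed into the namespace (the add_to_ns step that follows). A skeleton for pair 0 with the device names from the trace:

# Initiator: host-side device plus a bridge-facing peer.
ip link add initiator0 type veth peer name initiator0_br
ip link set initiator0 up
ip link set initiator0_br up
ip link set initiator0_br master nvmf_br

# Target: same shape, but the working end moves into the namespace
# (moving an interface between namespaces takes it down, so re-raise it there).
ip link add target0 type veth peer name target0_br
ip link set target0_br up
ip link set target0_br master nvmf_br
ip link set target0 netns nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set target0 up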
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:10.835 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:11.094 10.0.0.1 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:11.094 10.0.0.2 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
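Addresses come from an integer pool (0x0a000001 onwards) and are rendered to dotted-quad form with printf; each one is both assigned to the device and mirrored into its ifalias so later steps can recover it from sysfs. A sketch for the first address (the bit-shifting below is my condensation of the script's val_to_ip, which receives the four octets precomputed):

# 167772161 == 0x0a000001 == 10.0.0.1
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) $((  val        & 255 ))
}

addr=$(val_to_ip 167772161)                            # -> 10.0.0.1
ip addr add "$addr/24" dev initiator0
echo "$addr" | tee /sys/class/net/initiator0/ifalias   # stash for later lookup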
nvmf/setup.sh@66 -- # set_up initiator0 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 
00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:11.094 11:07:35 
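Each finished pair gets an INPUT rule opening TCP port 4420 (the NVMe/TCP listener port) on the initiator device, inserted at position 1 so it precedes any default drop, and tagged with the same SPDK_NVMF comment the teardown greps away. The pattern for one device:

# Accept NVMe/TCP traffic arriving on the initiator-side veth.
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'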
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target1 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:11.094 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772163 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator1/ifalias 00:24:11.095 10.0.0.3 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772164 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:11.095 10.0.0.4 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:11.095 
11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:11.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:24:11.095 00:24:11.095 --- 10.0.0.1 ping statistics --- 00:24:11.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.095 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.095 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:11.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:11.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:24:11.354 00:24:11.354 --- 10.0.0.2 ping statistics --- 00:24:11.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.354 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:11.354 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:11.354 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:24:11.354 00:24:11.354 --- 10.0.0.3 ping statistics --- 00:24:11.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.354 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:11.354 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:11.354 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:24:11.354 00:24:11.354 --- 10.0.0.4 ping statistics --- 00:24:11.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.354 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # return 0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:11.354 11:07:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:11.354 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.355 11:07:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:11.355 ' 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=67758 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 67758 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67758 ']' 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.355 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.355 [2024-12-05 11:07:35.968043] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:11.355 [2024-12-05 11:07:35.968164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.613 [2024-12-05 11:07:36.116793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.613 [2024-12-05 11:07:36.179173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.613 [2024-12-05 11:07:36.179235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.613 [2024-12-05 11:07:36.179247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.613 [2024-12-05 11:07:36.179256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.613 [2024-12-05 11:07:36.179265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
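The get_ip_address/ping_ip blocks traced above all reduce to one trick: every logical device (initiator0/initiator1 in the default namespace, target0/target1 inside nvmf_ns_spdk) keeps its test IP in /sys/class/net/<dev>/ifalias, and the in_ns argument names a bash array that is taken by nameref (local -n) and expanded in front of the command as an "ip netns exec" prefix. A minimal sketch of that pattern, assuming the same device and namespace names as the trace:

    # Sketch only; NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk) comes from nvmf/setup.sh.
    get_ip_address_sketch() {
        local dev=$1 in_ns=$2 ip
        if [[ -n $in_ns ]]; then
            local -n ns=$in_ns                                  # nameref onto the prefix array
            ip=$("${ns[@]}" cat "/sys/class/net/$dev/ifalias")  # read inside the namespace
        else
            ip=$(cat "/sys/class/net/$dev/ifalias")             # read in the default namespace
        fi
        [[ -n $ip ]] && echo "$ip"                              # empty ifalias prints nothing
    }

    get_ip_address_sketch target0 NVMF_TARGET_NS_CMD   # -> 10.0.0.2 in this run
    get_ip_address_sketch initiator0 ''                # -> 10.0.0.1

Keeping the address in ifalias rather than in shell state means any process, inside or outside the namespace, can recover it from sysfs alone.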
00:24:11.613 [2024-12-05 11:07:36.180235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.613 [2024-12-05 11:07:36.180387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.613 [2024-12-05 11:07:36.180441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.613 [2024-12-05 11:07:36.180443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.613 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.613 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:24:11.613 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:11.613 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.613 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 [2024-12-05 11:07:36.369741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 Malloc0 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 
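The bdev_set_options/framework_start_init pair in the trace above is the heart of this test. Because nvmf_tgt was started with --wait-for-rpc, the framework is still uninitialized when rpc_cmd first connects, so the global bdev_io pool can be shrunk to 5 entries with a per-thread cache of 1 before initialization finishes. With a pool that small, the bdevperf jobs below keep running out of spdk_bdev_io structures, which is exactly what drives the queue-and-retry path (spdk_bdev_queue_io_wait()) the test is named for. A sketch of the same sequence done by hand, assuming the standard scripts/rpc.py location in this repo (rpc_cmd is effectively a wrapper around it):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # path assumed
    $rpc bdev_set_options -p 5 -c 1   # bdev_io pool size 5, per-thread cache size 1
    $rpc framework_start_init         # now run the init that --wait-for-rpc deferred

bdev_set_options only takes effect before the bdev layer initializes, which is why this test cannot start the target without --wait-for-rpc.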
00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:11.872 [2024-12-05 11:07:36.418111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67798 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:11.872 { 00:24:11.872 "params": { 00:24:11.872 "name": "Nvme$subsystem", 00:24:11.872 "trtype": "$TEST_TRANSPORT", 00:24:11.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.872 "adrfam": "ipv4", 00:24:11.872 "trsvcid": "$NVMF_PORT", 00:24:11.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.872 "hdgst": ${hdgst:-false}, 00:24:11.872 "ddgst": ${ddgst:-false} 00:24:11.872 }, 00:24:11.872 "method": "bdev_nvme_attach_controller" 00:24:11.872 } 00:24:11.872 EOF 00:24:11.872 )") 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67800 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67803 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 
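Taken together, the rpc_cmd calls from bdev_io_wait.sh@20 through @25 build the smallest possible TCP export path. Restated as a standalone sketch with the same rpc wrapper variable as above, all values copied from the trace:

    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB ramdisk with 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the listener notice, any initiator in the default namespace can reach the namespace-side address 10.0.0.2:4420 over the veth pair.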
00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:11.872 { 00:24:11.872 "params": { 00:24:11.872 "name": "Nvme$subsystem", 00:24:11.872 "trtype": "$TEST_TRANSPORT", 00:24:11.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.872 "adrfam": "ipv4", 00:24:11.872 "trsvcid": "$NVMF_PORT", 00:24:11.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.872 "hdgst": ${hdgst:-false}, 00:24:11.872 "ddgst": ${ddgst:-false} 00:24:11.872 }, 00:24:11.872 "method": "bdev_nvme_attach_controller" 00:24:11.872 } 00:24:11.872 EOF 00:24:11.872 )") 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67805 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:11.872 { 00:24:11.872 "params": { 00:24:11.872 "name": "Nvme$subsystem", 00:24:11.872 "trtype": "$TEST_TRANSPORT", 00:24:11.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.872 "adrfam": "ipv4", 00:24:11.872 "trsvcid": "$NVMF_PORT", 00:24:11.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.872 "hdgst": ${hdgst:-false}, 00:24:11.872 "ddgst": ${ddgst:-false} 00:24:11.872 }, 00:24:11.872 "method": "bdev_nvme_attach_controller" 00:24:11.872 } 00:24:11.872 EOF 00:24:11.872 )") 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
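Each bdevperf instance receives its connection parameters through gen_nvmf_target_json, whose heredoc-into-array pattern the trace repeats four times. A simplified sketch of that shape (the real helper in nvmf/common.sh handles more variables and transports; gen_target_json_sketch is a hypothetical name, while the environment variables are the ones visible in the trace):

    gen_target_json_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do          # default to subsystem 1 when no args given
            config+=("$(cat <<EOF
    { "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4", "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
      "method": "bdev_nvme_attach_controller" }
    EOF
    )")
        done
        local IFS=,                             # join the fragments with commas
        printf '%s\n' "${config[*]}"
    }

The jq . step seen in the trace validates and pretty-prints the assembled document before it is handed to bdevperf.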
00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:11.872 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:11.872 { 00:24:11.872 "params": { 00:24:11.872 "name": "Nvme$subsystem", 00:24:11.872 "trtype": "$TEST_TRANSPORT", 00:24:11.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.872 "adrfam": "ipv4", 00:24:11.872 "trsvcid": "$NVMF_PORT", 00:24:11.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.873 "hdgst": ${hdgst:-false}, 00:24:11.873 "ddgst": ${ddgst:-false} 00:24:11.873 }, 00:24:11.873 "method": "bdev_nvme_attach_controller" 00:24:11.873 } 00:24:11.873 EOF 00:24:11.873 )") 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:11.873 "params": { 00:24:11.873 "name": "Nvme1", 00:24:11.873 "trtype": "tcp", 00:24:11.873 "traddr": "10.0.0.2", 00:24:11.873 "adrfam": "ipv4", 00:24:11.873 "trsvcid": "4420", 00:24:11.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.873 "hdgst": false, 00:24:11.873 "ddgst": false 00:24:11.873 }, 00:24:11.873 "method": "bdev_nvme_attach_controller" 00:24:11.873 }' 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:11.873 "params": { 00:24:11.873 "name": "Nvme1", 00:24:11.873 "trtype": "tcp", 00:24:11.873 "traddr": "10.0.0.2", 00:24:11.873 "adrfam": "ipv4", 00:24:11.873 "trsvcid": "4420", 00:24:11.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.873 "hdgst": false, 00:24:11.873 "ddgst": false 00:24:11.873 }, 00:24:11.873 "method": "bdev_nvme_attach_controller" 00:24:11.873 }' 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:11.873 "params": { 00:24:11.873 "name": "Nvme1", 00:24:11.873 "trtype": "tcp", 00:24:11.873 "traddr": "10.0.0.2", 00:24:11.873 "adrfam": "ipv4", 00:24:11.873 "trsvcid": "4420", 00:24:11.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.873 "hdgst": false, 00:24:11.873 "ddgst": false 00:24:11.873 }, 00:24:11.873 "method": "bdev_nvme_attach_controller" 00:24:11.873 }' 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:24:11.873 [2024-12-05 11:07:36.471351] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
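The four launches above differ only in core mask, shm id, and workload; the --json /dev/fd/63 in each command line is simply what bash process substitution expands to. Sketched with the hypothetical generator from the previous block:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # -m: core mask   -i: shm id (surfaces as --file-prefix=spdkN in the EAL args)
    # -q 128 -o 4096: queue depth 128, 4 KiB I/Os   -t 1: one second   -s 256: 256 MB of memory
    "$bdevperf" -m 0x10 -i 1 --json <(gen_target_json_sketch) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_target_json_sketch) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_target_json_sketch) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_target_json_sketch) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

Disjoint core masks (0x10, 0x20, 0x40, 0x80) keep the four reactors off each other's cores and off the target's 0xF mask, which matches the "Reactor started on core 4/5/6/7" notices below.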
00:24:11.873 [2024-12-05 11:07:36.471430] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:11.873 "params": { 00:24:11.873 "name": "Nvme1", 00:24:11.873 "trtype": "tcp", 00:24:11.873 "traddr": "10.0.0.2", 00:24:11.873 "adrfam": "ipv4", 00:24:11.873 "trsvcid": "4420", 00:24:11.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.873 "hdgst": false, 00:24:11.873 "ddgst": false 00:24:11.873 }, 00:24:11.873 "method": "bdev_nvme_attach_controller" 00:24:11.873 }' 00:24:11.873 [2024-12-05 11:07:36.486179] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:11.873 [2024-12-05 11:07:36.486276] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:11.873 [2024-12-05 11:07:36.497658] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:11.873 [2024-12-05 11:07:36.497752] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:24:11.873 11:07:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67798 00:24:12.131 [2024-12-05 11:07:36.527091] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:12.131 [2024-12-05 11:07:36.527727] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:24:12.131 [2024-12-05 11:07:36.693450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.131 [2024-12-05 11:07:36.754870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:12.131 [2024-12-05 11:07:36.766057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.390 [2024-12-05 11:07:36.813745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:12.390 Running I/O for 1 seconds... 00:24:12.390 [2024-12-05 11:07:36.891411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.390 [2024-12-05 11:07:36.892212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.390 Running I/O for 1 seconds... 00:24:12.390 [2024-12-05 11:07:36.955533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:12.390 [2024-12-05 11:07:36.957010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:12.650 Running I/O for 1 seconds... 00:24:12.650 Running I/O for 1 seconds... 
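With all four instances running, the script reaps them one at a time; wait returns each bdevperf exit status, so a failed workload fails the test rather than disappearing in the background:

    wait "$WRITE_PID"   # 67798 in this run
    wait "$READ_PID"    # 67800
    wait "$FLUSH_PID"   # 67803
    wait "$UNMAP_PID"   # 67805

The distinct shm ids assigned earlier (visible as --file-prefix=spdk1 through spdk4 in the EAL parameter lines) are what let four DPDK processes share the host without colliding on hugepage state.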
00:24:13.600 172616.00 IOPS, 674.28 MiB/s
00:24:13.600 Latency(us)
00:24:13.600 [2024-12-05T11:07:38.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.600 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:24:13.600 Nvme1n1 : 1.00 172245.06 672.83 0.00 0.00 739.08 331.58 2122.12
00:24:13.600 [2024-12-05T11:07:38.252Z] ===================================================================================================================
00:24:13.600 [2024-12-05T11:07:38.252Z] Total : 172245.06 672.83 0.00 0.00 739.08 331.58 2122.12
00:24:13.600 10633.00 IOPS, 41.54 MiB/s
00:24:13.600 Latency(us)
00:24:13.600 [2024-12-05T11:07:38.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.600 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:24:13.600 Nvme1n1 : 1.01 10689.78 41.76 0.00 0.00 11926.30 6366.35 19723.22
00:24:13.600 [2024-12-05T11:07:38.252Z] ===================================================================================================================
00:24:13.600 [2024-12-05T11:07:38.252Z] Total : 10689.78 41.76 0.00 0.00 11926.30 6366.35 19723.22
00:24:13.600 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67800
00:24:13.600 8496.00 IOPS, 33.19 MiB/s
00:24:13.600 Latency(us)
00:24:13.600 [2024-12-05T11:07:38.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.600 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:24:13.600 Nvme1n1 : 1.01 8570.12 33.48 0.00 0.00 14879.00 5398.92 22469.49
00:24:13.600 [2024-12-05T11:07:38.252Z] ===================================================================================================================
00:24:13.600 [2024-12-05T11:07:38.252Z] Total : 8570.12 33.48 0.00 0.00 14879.00 5398.92 22469.49
00:24:13.600 7518.00 IOPS, 29.37 MiB/s
00:24:13.600 Latency(us)
00:24:13.600 [2024-12-05T11:07:38.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:13.600 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:24:13.600 Nvme1n1 : 1.01 7588.65 29.64 0.00 0.00 16793.38 6522.39 31082.79
00:24:13.600 [2024-12-05T11:07:38.252Z] ===================================================================================================================
00:24:13.600 [2024-12-05T11:07:38.252Z] Total : 7588.65 29.64 0.00 0.00 16793.38 6522.39 31082.79
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67803
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67805
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- #
nvmfcleanup 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:13.864 rmmod nvme_tcp 00:24:13.864 rmmod nvme_fabrics 00:24:13.864 rmmod nvme_keyring 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 67758 ']' 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 67758 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67758 ']' 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67758 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67758 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.864 killing process with pid 67758 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67758' 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67758 00:24:13.864 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67758 00:24:14.124 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:14.124 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:24:14.124 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:24:14.124 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:14.124 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:14.125 11:07:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 
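nvmftestfini's cleanup, traced above, walks the same dev_map that setup created: the bridge and both initiator-side veth ends are deleted from the default namespace, while target0 and target1 are skipped (the continue branches) because removing the nvmf_ns_spdk namespace already took them away. Condensed into plain commands, with the namespace removal inferred from the xtrace-suppressed _remove_target_ns call (an assumption, since its body is not shown):

    ip netns delete nvmf_ns_spdk   # assumed body of _remove_target_ns; target0/1 vanish with it
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep everything but SPDK rules

The iptr step that follows in the trace restores every iptables rule except those tagged SPDK_NVMF, so test-added firewall entries never leak into the next job.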
00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:24:14.125 00:24:14.125 real 0m3.541s 00:24:14.125 user 0m14.036s 00:24:14.125 sys 0m2.313s 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:14.125 ************************************ 00:24:14.125 END TEST nvmf_bdev_io_wait 00:24:14.125 ************************************ 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:14.125 11:07:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:24:14.384 ************************************ 00:24:14.384 START TEST nvmf_queue_depth 00:24:14.384 ************************************ 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:14.384 * Looking for test storage... 00:24:14.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:24:14.384 11:07:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:14.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.384 --rc genhtml_branch_coverage=1 00:24:14.384 --rc genhtml_function_coverage=1 00:24:14.384 --rc genhtml_legend=1 00:24:14.384 --rc geninfo_all_blocks=1 00:24:14.384 --rc geninfo_unexecuted_blocks=1 00:24:14.384 00:24:14.384 ' 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:14.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.384 --rc genhtml_branch_coverage=1 00:24:14.384 --rc genhtml_function_coverage=1 00:24:14.384 --rc genhtml_legend=1 00:24:14.384 --rc geninfo_all_blocks=1 00:24:14.384 --rc geninfo_unexecuted_blocks=1 00:24:14.384 00:24:14.384 ' 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:14.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.384 --rc genhtml_branch_coverage=1 00:24:14.384 --rc genhtml_function_coverage=1 00:24:14.384 --rc genhtml_legend=1 00:24:14.384 --rc geninfo_all_blocks=1 00:24:14.384 --rc geninfo_unexecuted_blocks=1 00:24:14.384 00:24:14.384 ' 00:24:14.384 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:14.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.384 --rc genhtml_branch_coverage=1 00:24:14.384 --rc genhtml_function_coverage=1 00:24:14.384 --rc genhtml_legend=1 00:24:14.385 --rc geninfo_all_blocks=1 
00:24:14.385 --rc geninfo_unexecuted_blocks=1 00:24:14.385 00:24:14.385 ' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:14.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 
-- # '[' -n '' ']' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@223 -- # create_target_ns 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:14.385 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:14.386 11:07:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target0 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.386 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns 
target0 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:14.646 10.0.0.1 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:14.646 10.0.0.2 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@66 -- # set_up initiator0 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:14.646 11:07:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:14.646 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- 
# [[ veth == veth ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target1 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772163 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:14.647 10.0.0.3 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772164 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:14.647 10.0.0.4 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:14.647 11:07:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:14.647 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:14.648 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:24:14.907 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:14.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:24:14.908 00:24:14.908 --- 10.0.0.1 ping statistics --- 00:24:14.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.908 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:14.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:14.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:24:14.908 00:24:14.908 --- 10.0.0.2 ping statistics --- 00:24:14.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.908 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:14.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:14.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:24:14.908 00:24:14.908 --- 10.0.0.3 ping statistics --- 00:24:14.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.908 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:14.908 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:14.908 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:24:14.908 00:24:14.908 --- 10.0.0.4 ping statistics --- 00:24:14.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.908 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # return 0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:24:14.908 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:14.909 ' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@328 -- # nvmfpid=68068 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 68068 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68068 ']' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:14.909 11:07:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:14.909 [2024-12-05 11:07:39.542645] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:14.909 [2024-12-05 11:07:39.542750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.168 [2024-12-05 11:07:39.701516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.168 [2024-12-05 11:07:39.757366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.168 [2024-12-05 11:07:39.757427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.168 [2024-12-05 11:07:39.757439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.168 [2024-12-05 11:07:39.757449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.168 [2024-12-05 11:07:39.757458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
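The block above amounts to starting nvmf_tgt inside the test namespace and waiting for its RPC socket to come up. A minimal sketch of that sequence, with the binary path, flags, and retry budget taken from the log; the socket poll is an assumption, since the harness's waitforlisten also probes the RPC endpoint before returning:

    # Launch the target in the namespace (flags from the log: shm id 0, all
    # tracepoint groups enabled, core mask 0x2), then poll for the RPC socket.
    ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for _ in $(seq 1 100); do                 # max_retries=100, as in the log
        [ -S /var/tmp/spdk.sock ] && break    # assumed readiness check
        sleep 0.1
    done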
00:24:15.168 [2024-12-05 11:07:39.757804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 [2024-12-05 11:07:40.589713] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 Malloc0 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 [2024-12-05 11:07:40.635239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.153 
11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68124 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68124 /var/tmp/bdevperf.sock 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68124 ']' 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.153 11:07:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 [2024-12-05 11:07:40.685929] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:16.153 [2024-12-05 11:07:40.686014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68124 ] 00:24:16.412 [2024-12-05 11:07:40.829124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.412 [2024-12-05 11:07:40.902715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.346 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.346 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:24:17.346 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:17.346 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.346 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:17.346 NVMe0n1 00:24:17.346 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.346 11:07:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:17.346 Running I/O for 10 seconds... 
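The initiator side reduces to three steps, arguments as traced: start bdevperf idle (-z) on its own RPC socket, attach the exported namespace over TCP as NVMe0, then drive the 10-second verify workload at queue depth 1024 with 4 KiB I/O. A condensed sketch (the waitforlisten step between launch and attach is elided here):

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # attach the remote namespace as bdev NVMe0n1, then kick off the workload
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The MiB/s column in the results that follow is derived directly from the IOPS: 9068.34 IOPS x 4096 B / 2^20 ≈ 35.42 MiB/s.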
00:24:19.677 8571.00 IOPS, 33.48 MiB/s [2024-12-05T11:07:45.264Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-05T11:07:46.194Z] 8874.67 IOPS, 34.67 MiB/s [2024-12-05T11:07:47.129Z] 8846.75 IOPS, 34.56 MiB/s [2024-12-05T11:07:48.062Z] 8807.40 IOPS, 34.40 MiB/s [2024-12-05T11:07:48.997Z] 8871.33 IOPS, 34.65 MiB/s [2024-12-05T11:07:49.932Z] 8915.29 IOPS, 34.83 MiB/s [2024-12-05T11:07:51.328Z] 8972.00 IOPS, 35.05 MiB/s [2024-12-05T11:07:52.264Z] 9045.44 IOPS, 35.33 MiB/s [2024-12-05T11:07:52.264Z] 9040.80 IOPS, 35.32 MiB/s 00:24:27.612 Latency(us) 00:24:27.612 [2024-12-05T11:07:52.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.612 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:24:27.612 Verification LBA range: start 0x0 length 0x4000 00:24:27.612 NVMe0n1 : 10.07 9068.34 35.42 0.00 0.00 112405.41 19723.22 108852.18 00:24:27.612 [2024-12-05T11:07:52.264Z] =================================================================================================================== 00:24:27.612 [2024-12-05T11:07:52.264Z] Total : 9068.34 35.42 0.00 0.00 112405.41 19723.22 108852.18 00:24:27.612 { 00:24:27.612 "results": [ 00:24:27.612 { 00:24:27.612 "job": "NVMe0n1", 00:24:27.612 "core_mask": "0x1", 00:24:27.612 "workload": "verify", 00:24:27.612 "status": "finished", 00:24:27.612 "verify_range": { 00:24:27.612 "start": 0, 00:24:27.612 "length": 16384 00:24:27.612 }, 00:24:27.612 "queue_depth": 1024, 00:24:27.612 "io_size": 4096, 00:24:27.612 "runtime": 10.071079, 00:24:27.612 "iops": 9068.343123909563, 00:24:27.612 "mibps": 35.42321532777173, 00:24:27.612 "io_failed": 0, 00:24:27.612 "io_timeout": 0, 00:24:27.612 "avg_latency_us": 112405.40705709613, 00:24:27.612 "min_latency_us": 19723.21523809524, 00:24:27.612 "max_latency_us": 108852.17523809524 00:24:27.612 } 00:24:27.612 ], 00:24:27.612 "core_count": 1 00:24:27.612 } 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68124 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68124 ']' 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68124 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68124 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.612 killing process with pid 68124 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68124' 00:24:27.612 Received shutdown signal, test time was about 10.000000 seconds 00:24:27.612 00:24:27.612 Latency(us) 00:24:27.612 [2024-12-05T11:07:52.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.612 [2024-12-05T11:07:52.264Z] =================================================================================================================== 00:24:27.612 [2024-12-05T11:07:52.264Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.612 11:07:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68124 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68124 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:27.612 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:24:27.870 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:27.870 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:24:27.870 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:27.870 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:27.870 rmmod nvme_tcp 00:24:27.870 rmmod nvme_fabrics 00:24:27.870 rmmod nvme_keyring 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 68068 ']' 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 68068 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68068 ']' 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68068 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68068 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.871 killing process with pid 68068 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68068' 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68068 00:24:27.871 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68068 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:28.129 11:07:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:28.129 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:28.130 11:07:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:24:28.130 00:24:28.130 real 0m13.987s 00:24:28.130 user 0m23.522s 00:24:28.130 sys 0m2.536s 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.130 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:24:28.130 ************************************ 00:24:28.130 END TEST nvmf_queue_depth 00:24:28.130 ************************************ 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:24:28.389 ************************************ 00:24:28.389 START TEST nvmf_target_multipath 00:24:28.389 ************************************ 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:28.389 * Looking for test storage... 
00:24:28.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:24:28.389 11:07:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:28.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.389 --rc genhtml_branch_coverage=1 00:24:28.389 --rc genhtml_function_coverage=1 00:24:28.389 --rc genhtml_legend=1 00:24:28.389 --rc geninfo_all_blocks=1 00:24:28.389 --rc geninfo_unexecuted_blocks=1 00:24:28.389 00:24:28.389 ' 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:28.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.389 --rc genhtml_branch_coverage=1 00:24:28.389 --rc genhtml_function_coverage=1 00:24:28.389 --rc genhtml_legend=1 00:24:28.389 --rc geninfo_all_blocks=1 00:24:28.389 --rc geninfo_unexecuted_blocks=1 00:24:28.389 00:24:28.389 ' 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:28.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.389 --rc genhtml_branch_coverage=1 00:24:28.389 --rc genhtml_function_coverage=1 00:24:28.389 --rc genhtml_legend=1 00:24:28.389 --rc geninfo_all_blocks=1 00:24:28.389 --rc geninfo_unexecuted_blocks=1 00:24:28.389 00:24:28.389 ' 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:28.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.389 --rc genhtml_branch_coverage=1 00:24:28.389 --rc genhtml_function_coverage=1 00:24:28.389 --rc genhtml_legend=1 00:24:28.389 --rc geninfo_all_blocks=1 00:24:28.389 --rc geninfo_unexecuted_blocks=1 00:24:28.389 00:24:28.389 ' 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.389 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:28.390 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:28.390 
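The "[: : integer expression expected" complaint above is bash itself: common.sh line 31 runs '[' '' -eq 1 ']' because an unset test flag reaches the numeric comparison as an empty string. It is harmless in this run, but easy to reproduce and to guard against; a generic illustration follows (not the SPDK fix):

    flag=""
    [ "$flag" -eq 1 ]                  # prints "[: : integer expression expected" and returns non-zero
    if [ "${flag:-0}" -eq 1 ]; then    # defaulting empty/unset to 0 keeps the test quiet
        echo flag-set
    fi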
11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:28.390 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:28.649 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:28.650 11:07:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:28.650 
11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:28.650 10.0.0.1 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:28.650 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:28.650 11:07:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:28.651 10.0.0.2 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:28.651 
11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:28.651 11:07:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:28.651 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:28.651 11:07:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:28.911 10.0.0.3 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:28.911 10.0.0.4 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:28.911 11:07:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:28.911 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 
-j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:28.912 11:07:53 
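
Editor's note: ipts is a thin wrapper around iptables that stores the full argument list in an 'SPDK_NVMF:' comment on the rule, as the expanded command above shows; teardown can later list the INPUT chain and delete exactly the rules this run installed. A sketch, assuming the wrapper does nothing beyond appending the comment match (the real helper is in nvmf/common.sh):

  ipts() {
    # The rule's own arguments double as its comment, making it self-describing.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }

  # Open the NVMe/TCP port on the initiator side of the new veth pair.
  ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
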
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:28.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:24:28.912 00:24:28.912 --- 10.0.0.1 ping statistics --- 00:24:28.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.912 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:28.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:24:28.912 00:24:28.912 --- 10.0.0.2 ping statistics --- 00:24:28.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.912 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:28.912 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:28.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:28.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.141 ms 00:24:28.913 00:24:28.913 --- 10.0.0.3 ping statistics --- 00:24:28.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.913 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:28.913 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:28.913 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:24:28.913 00:24:28.913 --- 10.0.0.4 ping statistics --- 00:24:28.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.913 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # return 0 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:28.913 11:07:53 
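
Editor's note: the four pings above are ping_ips verifying each initiator/target pair in both directions across the nvmf_br bridge: the target namespace pings the initiator's host-side address, and the host pings the target's in-namespace address, every address being read back from ifalias rather than hard-coded. Condensed (device and namespace names as in the log):

  for pair in 0 1; do
    # namespace -> host: target side pings the initiator address
    ip netns exec nvmf_ns_spdk ping -c 1 "$(cat /sys/class/net/initiator$pair/ifalias)"
    # host -> namespace: initiator side pings the target address
    ping -c 1 "$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target$pair/ifalias)"
  done
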
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:28.913 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:28.914 11:07:53 
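
Editor's note: nvmf_legacy_env repopulates the classic NVMF_* variables by reading each device's ifalias back out of sysfs, inside the namespace for target devices and on the host for initiators. A simplified sketch of the lookup and the values this run settles on (the real get_ip_address resolves the namespace command through a bash nameref rather than hard-coding it):

  get_ip_address() {
    local dev=$1 in_ns=$2
    if [[ -n $in_ns ]]; then
      ip netns exec nvmf_ns_spdk cat "/sys/class/net/$dev/ifalias"
    else
      cat "/sys/class/net/$dev/ifalias"
    fi
  }

  NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)                  # 10.0.0.1
  NVMF_SECOND_INITIATOR_IP=$(get_ip_address initiator1)                 # 10.0.0.3
  NVMF_FIRST_TARGET_IP=$(get_ip_address target0 NVMF_TARGET_NS_CMD)     # 10.0.0.2
  NVMF_SECOND_TARGET_IP=$(get_ip_address target1 NVMF_TARGET_NS_CMD)    # 10.0.0.4
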
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:28.914 ' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 
10.0.0.4 ']' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.914 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # nvmfpid=68510 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # waitforlisten 68510 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68510 ']' 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.172 11:07:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:29.172 [2024-12-05 11:07:53.635309] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:29.172 [2024-12-05 11:07:53.635420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.172 [2024-12-05 11:07:53.792845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.430 [2024-12-05 11:07:53.872330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.430 [2024-12-05 11:07:53.872418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.430 [2024-12-05 11:07:53.872436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.430 [2024-12-05 11:07:53.872451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.430 [2024-12-05 11:07:53.872463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
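
Editor's note: nvmfappstart launches nvmf_tgt inside the nvmf_ns_spdk namespace (pid 68510 here) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. The records that follow provision the target over JSON-RPC and attach the initiator once per path; condensed into the underlying commands (flags as logged: -a allows any host NQN, -s sets the serial, -r enables ANA reporting; -g/-G on nvme connect request TCP header and data digests):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

  # One connect per listener; the kernel folds both controllers into a
  # single multipath-capable subsystem under the same host NQN/ID.
  host=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$host --hostid=$host \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$host --hostid=$host \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
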
00:24:29.430 [2024-12-05 11:07:53.873795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.430 [2024-12-05 11:07:53.873908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.430 [2024-12-05 11:07:53.874009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.430 [2024-12-05 11:07:53.874009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.364 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.364 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:30.364 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:30.364 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:30.364 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:30.364 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.364 11:07:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:30.621 [2024-12-05 11:07:55.072135] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.621 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:30.878 Malloc0 00:24:30.878 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:24:31.135 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:31.400 11:07:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.658 [2024-12-05 11:07:56.220687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.658 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:24:31.915 [2024-12-05 11:07:56.473082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:24:31.915 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:24:32.172 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:24:32.461 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # 
waitforserial SPDKISFASTANDAWESOME 00:24:32.461 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:24:32.461 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:32.461 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:32.461 11:07:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
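
Editor's note: once lsblk reports the SPDKISFASTANDAWESOME serial, get_subsystem walks /sys/class/nvme-subsystem to find the kernel subsystem matching the NQN and serial, and the two path devices fall out of a glob over its per-controller namespace nodes. check_ana_state, used throughout the rest of the run, then polls a path's ana_state attribute until it reports the expected ANA group state, giving up after 20 one-second retries; the fio randrw workload that follows runs while the listeners are flipped between optimized, non_optimized, and inaccessible. A sketch of both helpers as they behave in the trace:

  subsystem=nvme-subsys0                                      # matched by NQN + serial
  paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
  paths=("${paths[@]##*/}")                                   # -> nvme0c0n1 nvme0c1n1

  check_ana_state() {
    local path=$1 ana_state=$2 timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
      (( timeout-- == 0 )) && return 1                        # ~20 s ceiling
      sleep 1s
    done
  }

  check_ana_state nvme0c0n1 optimized
  check_ana_state nvme0c1n1 optimized
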
00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:34.364 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:34.365 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:24:34.365 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68655 00:24:34.365 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:24:34.365 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:34.365 [global] 00:24:34.365 thread=1 00:24:34.365 invalidate=1 00:24:34.365 rw=randrw 00:24:34.365 time_based=1 00:24:34.365 runtime=6 00:24:34.365 ioengine=libaio 00:24:34.365 direct=1 00:24:34.365 bs=4096 00:24:34.365 iodepth=128 00:24:34.365 norandommap=0 00:24:34.365 numjobs=1 00:24:34.365 00:24:34.365 verify_dump=1 00:24:34.365 verify_backlog=512 00:24:34.365 verify_state_save=0 00:24:34.365 do_verify=1 00:24:34.365 verify=crc32c-intel 00:24:34.365 [job0] 00:24:34.365 filename=/dev/nvme0n1 00:24:34.624 Could not set queue depth (nvme0n1) 00:24:34.624 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:34.624 fio-3.35 00:24:34.624 Starting 1 thread 00:24:35.559 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:35.865 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:24:36.126 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local 
ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:36.127 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:24:37.079 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:37.079 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:37.079 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:37.079 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.343 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:37.907 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:24:38.908 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:38.908 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:38.908 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:38.908 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68655 00:24:40.805 00:24:40.805 job0: (groupid=0, jobs=1): err= 0: pid=68682: Thu Dec 5 11:08:05 2024 00:24:40.805 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(259MiB/6004msec) 00:24:40.805 slat (usec): min=6, max=6037, avg=50.16, stdev=224.02 00:24:40.805 clat (usec): min=632, max=15415, avg=7747.12, stdev=1388.81 00:24:40.805 lat (usec): min=1085, max=15811, avg=7797.28, stdev=1399.30 00:24:40.805 clat percentiles (usec): 00:24:40.805 | 1.00th=[ 4490], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 6849], 00:24:40.805 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7767], 00:24:40.805 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[10552], 00:24:40.805 | 99.00th=[12125], 99.50th=[12780], 99.90th=[14091], 99.95th=[14746], 00:24:40.805 | 99.99th=[15401] 00:24:40.805 bw ( KiB/s): min= 8832, max=34384, per=54.84%, avg=24224.00, stdev=6559.78, samples=11 00:24:40.805 iops : min= 2208, max= 8596, avg=6056.00, stdev=1639.95, samples=11 00:24:40.805 write: IOPS=6858, BW=26.8MiB/s (28.1MB/s)(145MiB/5411msec); 0 zone resets 00:24:40.805 slat (usec): min=11, max=2630, avg=61.53, stdev=150.09 00:24:40.805 clat (usec): min=523, max=15263, avg=6678.97, stdev=1188.06 00:24:40.805 lat (usec): min=565, max=15287, avg=6740.50, stdev=1193.94 00:24:40.805 clat percentiles (usec): 00:24:40.805 | 1.00th=[ 3556], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 5997], 00:24:40.805 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6587], 60.00th=[ 6783], 00:24:40.805 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7963], 95.00th=[ 8979], 00:24:40.805 | 99.00th=[10421], 99.50th=[11207], 99.90th=[12125], 99.95th=[12649], 00:24:40.805 | 99.99th=[13304] 00:24:40.805 bw ( KiB/s): min= 9496, max=33648, per=88.38%, avg=24246.55, stdev=6172.63, samples=11 00:24:40.805 iops : min= 2374, max= 8412, avg=6061.64, stdev=1543.16, samples=11 00:24:40.805 lat (usec) : 750=0.01%, 
1000=0.01% 00:24:40.805 lat (msec) : 2=0.03%, 4=1.01%, 10=93.69%, 20=5.26% 00:24:40.805 cpu : usr=5.73%, sys=23.97%, ctx=6756, majf=0, minf=151 00:24:40.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:40.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:40.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:40.805 issued rwts: total=66300,37110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:40.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:40.805 00:24:40.805 Run status group 0 (all jobs): 00:24:40.805 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=259MiB (272MB), run=6004-6004msec 00:24:40.805 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=145MiB (152MB), run=5411-5411msec 00:24:40.805 00:24:40.805 Disk stats (read/write): 00:24:40.805 nvme0n1: ios=65238/36509, merge=0/0, ticks=473084/227294, in_queue=700378, util=98.61% 00:24:40.805 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:41.110 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:24:41.368 11:08:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:24:42.738 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:42.738 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:42.738 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:42.738 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:24:42.738 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68810 00:24:42.738 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:42.738 11:08:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:24:42.738 [global] 00:24:42.738 thread=1 00:24:42.738 invalidate=1 00:24:42.738 rw=randrw 00:24:42.738 time_based=1 00:24:42.738 runtime=6 00:24:42.738 ioengine=libaio 00:24:42.738 direct=1 00:24:42.738 bs=4096 00:24:42.738 iodepth=128 00:24:42.738 norandommap=0 00:24:42.738 numjobs=1 00:24:42.738 00:24:42.738 verify_dump=1 00:24:42.738 verify_backlog=512 00:24:42.738 verify_state_save=0 00:24:42.738 do_verify=1 00:24:42.738 verify=crc32c-intel 00:24:42.738 [job0] 00:24:42.738 filename=/dev/nvme0n1 00:24:42.738 Could not set queue depth (nvme0n1) 00:24:42.738 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:42.738 fio-3.35 00:24:42.738 Starting 1 thread 00:24:43.672 11:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:43.672 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:43.931 11:08:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:24:45.306 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:45.306 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:45.306 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:45.306 11:08:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:45.563 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:45.822 11:08:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:24:46.756 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:46.756 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:46.756 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:46.756 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68810 00:24:48.657 00:24:48.657 job0: (groupid=0, jobs=1): err= 0: pid=68831: Thu Dec 5 11:08:13 2024 00:24:48.657 read: IOPS=11.7k, BW=45.7MiB/s (48.0MB/s)(275MiB/6003msec) 00:24:48.657 slat (usec): min=5, max=6572, avg=42.71, stdev=209.27 00:24:48.657 clat (usec): min=191, max=21702, avg=7473.45, stdev=2523.98 00:24:48.657 lat (usec): min=201, max=21712, avg=7516.16, stdev=2533.23 00:24:48.657 clat percentiles (usec): 00:24:48.657 | 1.00th=[ 873], 5.00th=[ 1844], 10.00th=[ 4293], 20.00th=[ 6587], 00:24:48.657 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7767], 00:24:48.657 | 70.00th=[ 8291], 80.00th=[ 8979], 90.00th=[10290], 95.00th=[11469], 00:24:48.657 | 99.00th=[14353], 99.50th=[15926], 99.90th=[18220], 99.95th=[19268], 00:24:48.657 | 99.99th=[20055] 00:24:48.657 bw ( KiB/s): min= 8168, max=34272, per=52.45%, avg=24573.82, stdev=8474.18, samples=11 00:24:48.657 iops : min= 2042, max= 8568, avg=6143.45, stdev=2118.54, samples=11 00:24:48.657 write: IOPS=6985, BW=27.3MiB/s (28.6MB/s)(145MiB/5315msec); 0 zone resets 00:24:48.657 slat (usec): min=10, max=4067, avg=54.52, stdev=140.25 00:24:48.657 clat (usec): min=157, max=18121, avg=6371.67, stdev=2438.42 00:24:48.657 lat (usec): min=179, max=18146, avg=6426.18, stdev=2445.08 00:24:48.657 clat percentiles (usec): 00:24:48.657 | 1.00th=[ 611], 5.00th=[ 1106], 10.00th=[ 2442], 20.00th=[ 5080], 00:24:48.657 | 30.00th=[ 5997], 40.00th=[ 6325], 50.00th=[ 6587], 60.00th=[ 6915], 00:24:48.657 | 70.00th=[ 7177], 80.00th=[ 7767], 90.00th=[ 9110], 95.00th=[10159], 00:24:48.657 | 99.00th=[11994], 99.50th=[13435], 99.90th=[16450], 99.95th=[17433], 00:24:48.657 | 99.99th=[17957] 00:24:48.657 bw ( KiB/s): min= 8488, max=35192, per=88.11%, avg=24619.64, stdev=8342.53, samples=11 00:24:48.657 iops : min= 2122, max= 8798, avg=6154.91, stdev=2085.63, samples=11 00:24:48.657 lat (usec) : 250=0.01%, 500=0.30%, 750=0.82%, 1000=1.36% 00:24:48.657 lat (msec) : 2=4.17%, 4=4.61%, 10=79.20%, 20=9.52%, 50=0.01% 00:24:48.657 cpu : usr=5.51%, sys=23.49%, ctx=8447, majf=0, minf=114 00:24:48.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:48.657 issued rwts: total=70307,37126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.657 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:48.657 00:24:48.657 Run status group 0 (all jobs): 00:24:48.657 READ: bw=45.7MiB/s (48.0MB/s), 45.7MiB/s-45.7MiB/s (48.0MB/s-48.0MB/s), io=275MiB (288MB), run=6003-6003msec 00:24:48.657 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=145MiB (152MB), run=5315-5315msec 00:24:48.657 00:24:48.657 Disk stats (read/write): 00:24:48.657 nvme0n1: ios=69518/36269, merge=0/0, ticks=487230/216767, in_queue=703997, util=98.58% 00:24:48.657 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:48.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:24:48.914 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:49.173 rmmod nvme_tcp 00:24:49.173 rmmod nvme_fabrics 00:24:49.173 rmmod nvme_keyring 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n 68510 ']' 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@337 -- # killprocess 68510 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68510 ']' 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68510 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68510 00:24:49.173 killing process with pid 68510 00:24:49.173 
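[editor's note] The disconnect path above boils down to a simple poll: keep listing block devices until no entry reports the test serial anymore. A condensed, illustrative sketch of that idiom in bash, assuming the SPDKISFASTANDAWESOME serial used throughout this run (the function name and timeout here are illustrative, not the exact helper from autotest_common.sh):

    # Poll until no block device reports the given NVMe serial anymore.
    # Mirrors the lsblk/grep loop visible in the trace above; illustrative only.
    wait_serial_gone() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # give up after ~15 one-second polls
            sleep 1
        done
        return 0
    }
    # e.g. wait_serial_gone SPDKISFASTANDAWESOME after "nvme disconnect"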
11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68510' 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68510 00:24:49.173 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68510 00:24:49.432 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:49.432 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:24:49.432 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:24:49.432 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:49.432 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:49.432 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:49.432 11:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:49.432 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:49.432 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:49.432 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:49.432 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:49.432 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:49.433 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:49.433 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/initiator1/address ]] 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:49.690 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:24:49.691 00:24:49.691 real 0m21.362s 00:24:49.691 user 1m21.781s 00:24:49.691 sys 0m8.106s 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:49.691 ************************************ 00:24:49.691 END TEST nvmf_target_multipath 00:24:49.691 ************************************ 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:24:49.691 ************************************ 00:24:49.691 START TEST nvmf_zcopy 00:24:49.691 ************************************ 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:49.691 * Looking for test storage... 
00:24:49.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:24:49.691 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:49.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.950 --rc genhtml_branch_coverage=1 00:24:49.950 --rc genhtml_function_coverage=1 00:24:49.950 --rc genhtml_legend=1 00:24:49.950 --rc geninfo_all_blocks=1 00:24:49.950 --rc geninfo_unexecuted_blocks=1 00:24:49.950 00:24:49.950 ' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:49.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.950 --rc genhtml_branch_coverage=1 00:24:49.950 --rc genhtml_function_coverage=1 00:24:49.950 --rc genhtml_legend=1 00:24:49.950 --rc geninfo_all_blocks=1 00:24:49.950 --rc geninfo_unexecuted_blocks=1 00:24:49.950 00:24:49.950 ' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:49.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.950 --rc genhtml_branch_coverage=1 00:24:49.950 --rc genhtml_function_coverage=1 00:24:49.950 --rc genhtml_legend=1 00:24:49.950 --rc geninfo_all_blocks=1 00:24:49.950 --rc geninfo_unexecuted_blocks=1 00:24:49.950 00:24:49.950 ' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:49.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.950 --rc genhtml_branch_coverage=1 00:24:49.950 --rc genhtml_function_coverage=1 00:24:49.950 --rc genhtml_legend=1 00:24:49.950 --rc geninfo_all_blocks=1 00:24:49.950 --rc geninfo_unexecuted_blocks=1 00:24:49.950 00:24:49.950 ' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
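[editor's note] The lcov probe above runs through cmp_versions in scripts/common.sh, which splits both version strings into component arrays and compares them position by position, treating missing components as zero. A minimal standalone sketch of the strictly-less-than case shown in the trace (the real helper also handles '>', '=', and mixed .-: separators):

    # Return 0 if dot-separated numeric version $1 is strictly older than $2.
    # Simplified restatement of the cmp_versions logic traced above; illustrative.
    version_lt() {
        local IFS=.
        local -a v1=($1) v2=($2)
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # absent components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not strictly less
    }
    # e.g. version_lt 1.15 2 succeeds, matching the "lt 1.15 2" check in the trace.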
00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.950 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:49.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # 
local -g is_hw=no 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@223 -- # create_target_ns 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:49.951 11:08:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:49.951 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target0 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 
00:24:49.952 10.0.0.1 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:49.952 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:50.212 10.0.0.2 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator0_br 
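[editor's note] Each interface pair the setup trace builds follows the same recipe: one veth pair per endpoint, the target end pushed into the nvmf_ns_spdk namespace, a 10.0.0.x/24 address on each side (the addresses come from an integer pool, 167772161 == 0x0A000001 == 10.0.0.1, rendered via printf), the bridge-facing ends enslaved to nvmf_br, and TCP/4420 opened in iptables. A condensed, illustrative restatement of the commands visible above for pair 0 (ordering simplified; bringing links up after the netns move is deliberate, since moving a veth into a namespace takes it down):

    # Recreate interface pair 0 roughly as the trace above does; illustrative.
    setup_pair0() {
        ip link add initiator0 type veth peer name initiator0_br
        ip link add target0    type veth peer name target0_br
        ip link set target0 netns nvmf_ns_spdk              # target side lives in the test namespace
        ip addr add 10.0.0.1/24 dev initiator0
        ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
        ip link set initiator0 up
        ip netns exec nvmf_ns_spdk ip link set target0 up
        ip link set initiator0_br master nvmf_br            # bridge ends join nvmf_br
        ip link set target0_br master nvmf_br
        ip link set initiator0_br up
        ip link set target0_br up
        iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
    }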
00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:50.212 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:50.213 11:08:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator1 
ip=167772163 in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772163 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:50.213 10.0.0.3 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772164 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:50.213 10.0.0.4 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:50.213 11:08:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:50.213 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:50.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:24:50.214 00:24:50.214 --- 10.0.0.1 ping statistics --- 00:24:50.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.214 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:50.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:50.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:24:50.214 00:24:50.214 --- 10.0.0.2 ping statistics --- 00:24:50.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.214 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:50.214 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:50.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:50.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:24:50.474 00:24:50.474 --- 10.0.0.3 ping statistics --- 00:24:50.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.474 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:50.474 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:50.474 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.120 ms 00:24:50.474 00:24:50.474 --- 10.0.0.4 ping statistics --- 00:24:50.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.474 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # return 0 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.474 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:50.475 11:08:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:50.475 ' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=69171 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 69171 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 69171 ']' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.475 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.475 [2024-12-05 11:08:15.055643] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:50.475 [2024-12-05 11:08:15.055778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.733 [2024-12-05 11:08:15.200508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.733 [2024-12-05 11:08:15.256634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.733 [2024-12-05 11:08:15.256697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.733 [2024-12-05 11:08:15.256710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.733 [2024-12-05 11:08:15.256720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.733 [2024-12-05 11:08:15.256728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
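The nvmfappstart/waitforlisten handoff above reduces to two steps: launch nvmf_tgt inside the namespace that nvmf/setup.sh wired up, then poll its RPC socket until it answers. A minimal standalone sketch of that pattern, assuming a stock SPDK checkout for the rpc.py path (the harness's waitforlisten also handles retry limits that this omits):

    # launch the target inside the test namespace: instance 0, all tracepoint groups, core mask 0x2
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app responds; rpc_get_methods is a built-in RPC
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || exit 1   # give up if the target died during startup
        sleep 0.5
    done

The "Reactor started on core 1" notice just below marks the point at which this poll starts succeeding.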
00:24:50.733 [2024-12-05 11:08:15.257058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.733 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.733 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:24:50.733 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:50.733 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:50.733 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.991 [2024-12-05 11:08:15.411069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.991 [2024-12-05 11:08:15.431227] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.991 malloc0 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:50.991 { 00:24:50.991 "params": { 00:24:50.991 "name": "Nvme$subsystem", 00:24:50.991 "trtype": "$TEST_TRANSPORT", 00:24:50.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:50.991 "adrfam": "ipv4", 00:24:50.991 "trsvcid": "$NVMF_PORT", 00:24:50.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:50.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:50.991 "hdgst": ${hdgst:-false}, 00:24:50.991 "ddgst": ${ddgst:-false} 00:24:50.991 }, 00:24:50.991 "method": "bdev_nvme_attach_controller" 00:24:50.991 } 00:24:50.991 EOF 00:24:50.991 )") 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:24:50.991 11:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:50.991 "params": { 00:24:50.991 "name": "Nvme1", 00:24:50.991 "trtype": "tcp", 00:24:50.991 "traddr": "10.0.0.2", 00:24:50.991 "adrfam": "ipv4", 00:24:50.991 "trsvcid": "4420", 00:24:50.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:50.991 "hdgst": false, 00:24:50.991 "ddgst": false 00:24:50.991 }, 00:24:50.991 "method": "bdev_nvme_attach_controller" 00:24:50.991 }' 00:24:50.991 [2024-12-05 11:08:15.528500] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:50.991 [2024-12-05 11:08:15.529181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69210 ] 00:24:51.250 [2024-12-05 11:08:15.681950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.250 [2024-12-05 11:08:15.762204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.508 Running I/O for 10 seconds... 
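For reference, the rpc_cmd sequence traced above (zcopy.sh@22 through @30) corresponds to the following plain rpc.py calls. This is a hedged transcription of the log against the default /var/tmp/spdk.sock socket, not a verbatim replay of the harness:

    # TCP transport: -c 0 sets the in-capsule data size to zero, --zcopy enables zero-copy
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: allow any host (-a), fixed serial number, at most 10 namespaces
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB RAM-backed bdev with a 4096-byte block size, then expose it as namespace 1
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The verify-mode bdevperf job whose per-second IOPS follow attaches to this subsystem at 10.0.0.2:4420.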
00:24:53.379 6617.00 IOPS, 51.70 MiB/s [2024-12-05T11:08:18.966Z] 6709.50 IOPS, 52.42 MiB/s [2024-12-05T11:08:20.343Z] 6751.33 IOPS, 52.74 MiB/s [2024-12-05T11:08:21.279Z] 6759.25 IOPS, 52.81 MiB/s [2024-12-05T11:08:22.216Z] 6778.60 IOPS, 52.96 MiB/s [2024-12-05T11:08:23.150Z] 6786.67 IOPS, 53.02 MiB/s [2024-12-05T11:08:24.084Z] 6748.29 IOPS, 52.72 MiB/s [2024-12-05T11:08:25.043Z] 6727.25 IOPS, 52.56 MiB/s [2024-12-05T11:08:25.974Z] 6718.78 IOPS, 52.49 MiB/s [2024-12-05T11:08:25.974Z] 6654.50 IOPS, 51.99 MiB/s 00:25:01.322 Latency(us) 00:25:01.322 [2024-12-05T11:08:25.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.322 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:25:01.322 Verification LBA range: start 0x0 length 0x1000 00:25:01.322 Nvme1n1 : 10.01 6656.44 52.00 0.00 0.00 19171.43 2652.65 27962.03 00:25:01.322 [2024-12-05T11:08:25.974Z] =================================================================================================================== 00:25:01.322 [2024-12-05T11:08:25.974Z] Total : 6656.44 52.00 0.00 0.00 19171.43 2652.65 27962.03 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69329 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:01.580 { 00:25:01.580 "params": { 00:25:01.580 "name": "Nvme$subsystem", 00:25:01.580 "trtype": "$TEST_TRANSPORT", 00:25:01.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:01.580 "adrfam": "ipv4", 00:25:01.580 "trsvcid": "$NVMF_PORT", 00:25:01.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:01.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:01.580 "hdgst": ${hdgst:-false}, 00:25:01.580 "ddgst": ${ddgst:-false} 00:25:01.580 }, 00:25:01.580 "method": "bdev_nvme_attach_controller" 00:25:01.580 } 00:25:01.580 EOF 00:25:01.580 )") 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:25:01.580 [2024-12-05 11:08:26.177219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.580 [2024-12-05 11:08:26.177279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:25:01.580 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:01.580 "params": { 00:25:01.580 "name": "Nvme1", 00:25:01.580 "trtype": "tcp", 00:25:01.580 "traddr": "10.0.0.2", 00:25:01.580 "adrfam": "ipv4", 00:25:01.580 "trsvcid": "4420", 00:25:01.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.580 "hdgst": false, 00:25:01.580 "ddgst": false 00:25:01.580 }, 00:25:01.580 "method": "bdev_nvme_attach_controller" 00:25:01.580 }' 00:25:01.580 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.580 [2024-12-05 11:08:26.189185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.580 [2024-12-05 11:08:26.189221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.580 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.580 [2024-12-05 11:08:26.201179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.580 [2024-12-05 11:08:26.201215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.580 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.580 [2024-12-05 11:08:26.213179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.580 [2024-12-05 11:08:26.213215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.580 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.580 [2024-12-05 11:08:26.225228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.580 [2024-12-05 11:08:26.225277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.580 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.237246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.237300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 [2024-12-05 11:08:26.239984] Starting SPDK v25.01-pre 
git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:01.843 [2024-12-05 11:08:26.240100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69329 ] 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.249217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.249265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.261196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.261231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.273194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.273228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.285209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.285250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.297246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.297297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
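The bdevperf instance being brought up here (pid 69329) receives its controller config over --json /dev/fd/63, i.e. a bash process substitution fed by gen_nvmf_target_json; the printf output traced above is the fragment being generated. A sketch of the same plumbing with the params copied from the log (the outer "subsystems"/"bdev" wrapper is an assumption about the generated document's shape, not quoted from this log):

    # the JSON document bdevperf loads at startup instead of issuing live RPCs
    config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }'
    # 5-second 50/50 random read/write run, queue depth 128, 8 KiB I/O
    ./build/examples/bdevperf --json <(echo "$config") -t 5 -q 128 -w randrw -M 50 -o 8192

The interleaved "2024/12/05 11:08:26 error on JSON-RPC call ..." lines come from a Go-based JSON-RPC client: map[...] is Go's default map formatting and %!s(bool=false) is Go's fmt notation for a bool passed to a %s verb, so the odd rendering is expected, not log corruption.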
00:25:01.843 [2024-12-05 11:08:26.309232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.309273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.321239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.321286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.843 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.843 [2024-12-05 11:08:26.333235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.843 [2024-12-05 11:08:26.333274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.345234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.345274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.357227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.357261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.369227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.369261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.381231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:25:01.844 [2024-12-05 11:08:26.381269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.393238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.393275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.402866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.844 [2024-12-05 11:08:26.405233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.405272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.417243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.417280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.429330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.429387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.441387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.441479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.453318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:25:01.844 [2024-12-05 11:08:26.453379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.465272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.465317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 [2024-12-05 11:08:26.466226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.477273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.477318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:01.844 [2024-12-05 11:08:26.489268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:01.844 [2024-12-05 11:08:26.489313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:01.844 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.501266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.501304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.513271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.513307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.525289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 
[2024-12-05 11:08:26.525336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.537299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.537344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.549293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.549339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.561355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.561424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.573805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.573871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.585827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.585899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.597803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.597858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 
11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.609801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.609857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.621806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.621864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 [2024-12-05 11:08:26.633833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:02.103 [2024-12-05 11:08:26.633886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:02.103 2024/12/05 11:08:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:02.103 Running I/O for 5 seconds... 
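The wall of identical failures around this randrw run is the point of the exercise: while the 5-second job drives zero-copy I/O, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which already exists. Each attempt pauses the subsystem (hence the paired nvmf_rpc_ns_paused messages), fails with JSON-RPC -32602, and resumes it, so pause/resume is exercised continuously under load. The harness disables xtrace for this loop (zcopy.sh@41), so the following reconstruction is an assumption consistent with the log rather than the script's literal text:

    # hammer the subsystem with a duplicate-namespace add while the perf job is alive;
    # every call must fail cleanly with -32602 and the target must keep serving I/O
    while kill -0 "$perfpid" 2> /dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done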
[... identical nvmf_subsystem_add_ns failure triplets (subsystem.c:2130 -> nvmf_rpc.c:1520 -> JSON-RPC client error) repeat every 10-20 ms while I/O runs, 11:08:26.645 through 11:08:27.627 (elapsed 00:25:02.103 through 00:25:03.144) ...]
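A side note on the %!s(bool=false) tokens that appear in every logged params map: they are not part of the request payload. They are Go's fmt package diagnosing a %s verb applied to a bool somewhere in the client's logging path (the exact call site is not visible in this log). The behaviour is easy to reproduce:

package main

import "fmt"

func main() {
	// %s is not a valid verb for bool; fmt emits a diagnostic token
	// instead of panicking.
	fmt.Printf("hide_metadata:%s no_auto_visible:%s\n", false, false)
	// prints: hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)

	// %t (or %v) prints the value as intended.
	fmt.Printf("hide_metadata:%t no_auto_visible:%t\n", false, false)
	// prints: hide_metadata:false no_auto_visible:false
}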
00:25:03.144 [2024-12-05 11:08:27.643152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:25:03.144 [2024-12-05 11:08:27.643222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:03.144 12080.00 IOPS, 94.38 MiB/s [2024-12-05T11:08:27.796Z]
2024/12/05 11:08:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... failure triplets continue at 11:08:27.661 through 11:08:27.743 ...]
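One consistency check worth noting on the interim throughput report above: 94.38 MiB/s divided by 12080 IOPS comes to roughly 8192 bytes per operation, so the 5-second I/O run appears to be using an 8 KiB block size. That figure is inferred from the two reported numbers, not stated anywhere in the log.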
[... the same failure triplet keeps repeating, 11:08:27.758 through 11:08:28.592 (elapsed 00:25:03.145 through 00:25:04.227) ...]
00:25:04.227 [2024-12-05 11:08:28.609249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:25:04.227 [2024-12-05 11:08:28.609320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:04.227 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.227 [2024-12-05 11:08:28.626997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.227 [2024-12-05 11:08:28.627091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.227 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.227 [2024-12-05 11:08:28.642156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.642229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 12188.00 IOPS, 95.22 MiB/s [2024-12-05T11:08:28.880Z] [2024-12-05 11:08:28.658569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.658659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.669937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.670004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.688745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.688818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.703785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.703844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:25:04.228 [2024-12-05 11:08:28.719413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.719479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.736972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.737040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.751669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.751734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.768676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.768744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.784605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.784673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.803724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.803800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.818371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.818447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.836928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.837019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.851751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.851825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.862349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.862428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.228 [2024-12-05 11:08:28.876681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.228 [2024-12-05 11:08:28.876750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.228 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.488 [2024-12-05 11:08:28.893515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.488 [2024-12-05 11:08:28.893599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.488 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.488 [2024-12-05 11:08:28.908997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:28.909064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:25:04.489 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:28.925562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:28.925638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:28.941228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:28.941296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:28.958046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:28.958114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:28.973922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:28.973988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:28.990349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:28.990423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.006991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.007060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.023512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.023576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.040053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.040123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.056303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.056363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.076805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.076868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.093348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.093406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.110960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.111059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.489 [2024-12-05 11:08:29.132204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.489 [2024-12-05 11:08:29.132273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.489 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.148483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.148550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.167451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.167517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.181788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.181853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.193089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.193153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.201884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.201935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:25:04.748 [2024-12-05 11:08:29.216686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.216758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.237935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.238007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.258515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.258601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.274813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.274879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.284706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.284763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.748 [2024-12-05 11:08:29.299066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.748 [2024-12-05 11:08:29.299151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.748 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.749 [2024-12-05 11:08:29.308801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:25:04.749 [2024-12-05 11:08:29.308863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.749 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.749 [2024-12-05 11:08:29.323085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.749 [2024-12-05 11:08:29.323155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.749 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.749 [2024-12-05 11:08:29.340020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.749 [2024-12-05 11:08:29.340088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.749 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.749 [2024-12-05 11:08:29.356448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.749 [2024-12-05 11:08:29.356509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.749 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.749 [2024-12-05 11:08:29.374521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.749 [2024-12-05 11:08:29.374599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.749 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.749 [2024-12-05 11:08:29.389947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:04.749 [2024-12-05 11:08:29.390013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.749 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.408496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.408561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.422567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.422636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.439751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.439813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.455224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.455284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.464874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.464934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.479056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.479130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.496090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.496163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.512308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.512369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.523812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.523869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.540796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.540859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.555136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.555199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.571701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.571765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.587582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.587650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.604567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.604640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.621209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.621273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 [2024-12-05 11:08:29.637317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.637383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.008 12263.00 IOPS, 95.80 MiB/s [2024-12-05T11:08:29.660Z] [2024-12-05 11:08:29.654015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.008 [2024-12-05 11:08:29.654075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.008 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.670432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.670492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.687684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.687749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.703169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.703227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.719280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.719333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.735410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.735463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.746792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.746851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.764356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.764419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.780765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.780827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.798131] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.798226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.814976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.815036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.831454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.831521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.849509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.849572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.860850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.860906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.877232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 11:08:29.877295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:05.267 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:05.267 [2024-12-05 11:08:29.893825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:05.267 [2024-12-05 
11:08:29.893887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:05.268 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:25:05.268 [2024-12-05 11:08:29.910372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:25:05.268 [2024-12-05 11:08:29.910434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:05.268 2024/12/05 11:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... this three-message pattern (subsystem.c "Requested NSID 1 already in use", nvmf_rpc.c "Unable to add namespace", Go JSON-RPC client "Code=-32602 Msg=Invalid parameters") repeats with identical parameters roughly every 10-20 ms from 11:08:29.926 through 11:08:30.633; only the timestamps differ ...]
00:25:06.045 12292.25 IOPS, 96.03 MiB/s [2024-12-05T11:08:30.697Z]
[... the same error pattern continues uninterrupted from 11:08:30.649 through 11:08:31.632 ...]
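For readers reproducing this failure mode outside the harness: the rejection is deterministic. NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so spdk_nvmf_subsystem_add_ns_ext fails and the target maps the failure to JSON-RPC error -32602 (Invalid parameters). Below is a minimal sketch of that round trip in Go (the same language as the client emitting the "2024/12/05 ..." lines above). It assumes the default /var/tmp/spdk.sock RPC socket and a pre-created malloc0 bdev and cnode1 subsystem; it is an illustration of the failing call, not the test's actual client.

    // duplicate_ns.go - sketch: send nvmf_subsystem_add_ns twice over SPDK's
    // JSON-RPC Unix socket; the second call is rejected because NSID 1 is
    // already in use. Socket path and object names are assumptions.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net"
    )

    type rpcRequest struct {
        Version string      `json:"jsonrpc"`
        ID      int         `json:"id"`
        Method  string      `json:"method"`
        Params  interface{} `json:"params"`
    }

    type rpcError struct {
        Code    int    `json:"code"`
        Message string `json:"message"`
    }

    type rpcResponse struct {
        Result json.RawMessage `json:"result"`
        Error  *rpcError       `json:"error"`
    }

    func main() {
        // Default SPDK application RPC socket (assumption; matches spdk_tgt defaults).
        conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        enc := json.NewEncoder(conn)
        dec := json.NewDecoder(conn)

        // Same parameter shape as the failing calls logged above.
        params := map[string]interface{}{
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": map[string]interface{}{
                "bdev_name": "malloc0",
                "nsid":      1,
            },
        }

        for id := 1; id <= 2; id++ {
            if err := enc.Encode(rpcRequest{Version: "2.0", ID: id, Method: "nvmf_subsystem_add_ns", Params: params}); err != nil {
                log.Fatal(err)
            }
            var resp rpcResponse
            if err := dec.Decode(&resp); err != nil {
                log.Fatal(err)
            }
            if resp.Error != nil {
                // On a fresh subsystem the first call succeeds and the second
                // fails here with Code=-32602 Msg=Invalid parameters.
                fmt.Printf("call %d: Code=%d Msg=%s\n", id, resp.Error.Code, resp.Error.Message)
            } else {
                fmt.Printf("call %d: ok\n", id)
            }
        }
    }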
00:25:07.083 12379.00 IOPS, 96.71 MiB/s [2024-12-05T11:08:31.735Z]
[... one further occurrence of the error pattern at 11:08:31.649 ...]
00:25:07.083                                                                   Latency(us)
00:25:07.083 [2024-12-05T11:08:31.735Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:25:07.083 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:25:07.083 Nvme1n1                     :       5.01    12380.90      96.73       0.00     0.00    10326.20    4181.82   22843.98
00:25:07.083 [2024-12-05T11:08:31.735Z] ===================================================================================================================
00:25:07.083 [2024-12-05T11:08:31.735Z] Total                       :             12380.90      96.73       0.00     0.00    10326.20    4181.82   22843.98
[... after the I/O summary the same error pattern resumes, roughly every 12 ms from 11:08:31.661 through 11:08:31.829, while the test tears down ...]
00:25:07.343 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69329) - No such process
00:25:07.343 11:08:31
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69329 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:07.343 delay0 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.343 11:08:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:25:07.757 [2024-12-05 11:08:32.024711] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:14.316 Initializing NVMe Controllers 00:25:14.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:14.316 Initialization complete. Launching workers. 
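(Aside: stripped of the xtrace noise, target/zcopy.sh lines 52-56 above boil down to three RPCs plus one example binary. A minimal re-creation in plain shell, assuming a running SPDK target that already exposes the malloc0 bdev and the cnode1 subsystem from earlier in this run; rpc_cmd is the autotest wrapper around scripts/rpc.py:)

    # detach namespace 1, then wrap malloc0 in a delay bdev; the four latency flags
    # set average/p99 read and write delays in microseconds (1000000 us = 1 s per I/O)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive the now-slow namespace and abort in-flight I/O: core 0, 5 seconds, queue depth 64, 50/50 randrw
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

(The delay bdev keeps I/O outstanding long enough for the abort commands to find something to cancel, which is what the success/unsuccessful counts below report.)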
00:25:14.316 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 57 00:25:14.316 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 333, failed to submit 44 00:25:14.316 success 165, unsuccessful 168, failed 0 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:14.316 rmmod nvme_tcp 00:25:14.316 rmmod nvme_fabrics 00:25:14.316 rmmod nvme_keyring 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 69171 ']' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 69171 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 69171 ']' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 69171 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69171 00:25:14.316 killing process with pid 69171 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69171' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 69171 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 69171 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:14.316 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:25:14.317 11:08:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:14.317 00:25:14.317 real 0m24.494s 00:25:14.317 user 0m38.473s 00:25:14.317 sys 0m7.880s 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:14.317 ************************************ 00:25:14.317 END TEST nvmf_zcopy 00:25:14.317 ************************************ 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:25:14.317 ************************************ 00:25:14.317 START TEST nvmf_nmic 00:25:14.317 ************************************ 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:25:14.317 * Looking for test storage... 00:25:14.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@345 -- # : 1 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:14.317 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:25:14.579 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:25:14.579 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.579 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:25:14.579 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.580 --rc genhtml_branch_coverage=1 00:25:14.580 --rc genhtml_function_coverage=1 00:25:14.580 --rc genhtml_legend=1 00:25:14.580 --rc geninfo_all_blocks=1 00:25:14.580 --rc geninfo_unexecuted_blocks=1 00:25:14.580 00:25:14.580 ' 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.580 --rc genhtml_branch_coverage=1 00:25:14.580 --rc genhtml_function_coverage=1 00:25:14.580 --rc genhtml_legend=1 00:25:14.580 --rc geninfo_all_blocks=1 00:25:14.580 --rc geninfo_unexecuted_blocks=1 00:25:14.580 00:25:14.580 ' 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.580 --rc genhtml_branch_coverage=1 00:25:14.580 --rc genhtml_function_coverage=1 00:25:14.580 --rc genhtml_legend=1 00:25:14.580 --rc geninfo_all_blocks=1 00:25:14.580 --rc geninfo_unexecuted_blocks=1 00:25:14.580 00:25:14.580 ' 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.580 --rc genhtml_branch_coverage=1 00:25:14.580 --rc genhtml_function_coverage=1 00:25:14.580 --rc genhtml_legend=1 00:25:14.580 --rc geninfo_all_blocks=1 00:25:14.580 --rc geninfo_unexecuted_blocks=1 00:25:14.580 00:25:14.580 ' 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:14.580 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:14.580 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # 
'[' 0 -eq 1 ']' 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@223 -- # create_target_ns 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:14.580 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@105 -- # delete_main_bridge 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:14.581 11:08:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target0 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:14.581 10.0.0.1 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:14.581 10.0.0.2 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target0 up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:14.581 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local 
initiator=initiator1 target=target1 _ns= 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target1 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set 
target1_br up 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:14.582 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772163 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:14.842 10.0.0.3 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772164 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:14.842 10.0.0.4 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/setup.sh@66 -- # set_up initiator1 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:14.842 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # 
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:14.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:25:14.843 00:25:14.843 --- 10.0.0.1 ping statistics --- 00:25:14.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.843 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:14.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:14.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:25:14.843 00:25:14.843 --- 10.0.0.2 ping statistics --- 00:25:14.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.843 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:14.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:14.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:25:14.843 00:25:14.843 --- 10.0.0.3 ping statistics --- 00:25:14.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.843 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:25:14.843 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:14.844 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:14.844 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:25:14.844 00:25:14.844 --- 10.0.0.4 ping statistics --- 00:25:14.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.844 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # return 0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
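The trace above shows the pattern setup.sh uses throughout this section: each ip command is built as a string and passed through eval, with an optional bash nameref (local -n) resolving NVMF_TARGET_NS_CMD into an "ip netns exec nvmf_ns_spdk" prefix so the same helper runs either on the host or inside the target namespace. A minimal standalone sketch of that pattern, assuming bash 4.3+ for namerefs, root privileges, and that the named veth devices already exist:

    #!/usr/bin/env bash
    # Prefix prepended to commands that must run inside the target
    # namespace, exactly as the trace shows for target1.
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

    # set_up <dev> [name-of-prefix-array]
    set_up() {
        local dev=$1 in_ns=${2:-}
        if [[ -n $in_ns ]]; then
            local -n ns=$in_ns                    # nameref, bash 4.3+
            eval "${ns[*]} ip link set $dev up"   # runs inside the netns
        else
            eval " ip link set $dev up"           # runs on the host
        fi
    }

    set_up initiator1                   # host side
    set_up target1 NVMF_TARGET_NS_CMD   # inside nvmf_ns_spdk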
00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:14.844 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:15.103 ' 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=69707 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 69707 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69707 ']' 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
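Before the target application starts, the legacy NVMF_* variables are recovered without parsing ip addr output: each interface's address was recorded in its ifalias at creation time, so a plain cat of /sys/class/net/<dev>/ifalias (optionally behind ip netns exec) yields the IP, as the repeated setup.sh@163 records above show. A simplified sketch of that lookup; get_ip here is a hypothetical stand-in for setup.sh's get_ip_address, and it assumes the aliases were written during setup:

    # get_ip <dev> [netns] -- read the address recorded in the interface alias.
    get_ip() {
        local dev=$1 netns=${2:-} ip
        if [[ -n $netns ]]; then
            ip=$(ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias")
        else
            ip=$(cat "/sys/class/net/$dev/ifalias")
        fi
        [[ -n $ip ]] && echo "$ip"
    }

    NVMF_FIRST_INITIATOR_IP=$(get_ip initiator0)          # 10.0.0.1
    NVMF_FIRST_TARGET_IP=$(get_ip target0 nvmf_ns_spdk)   # 10.0.0.2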
00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.103 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.103 [2024-12-05 11:08:39.591415] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:15.103 [2024-12-05 11:08:39.591533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.103 [2024-12-05 11:08:39.751166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.360 [2024-12-05 11:08:39.831568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.360 [2024-12-05 11:08:39.831661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.360 [2024-12-05 11:08:39.831678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.360 [2024-12-05 11:08:39.831691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.360 [2024-12-05 11:08:39.831702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.360 [2024-12-05 11:08:39.832878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.360 [2024-12-05 11:08:39.832966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.360 [2024-12-05 11:08:39.833033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.360 [2024-12-05 11:08:39.833037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.360 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.360 [2024-12-05 11:08:40.012548] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc0 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 Malloc0 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 [2024-12-05 11:08:40.070686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 test case1: single bdev can't be used in multiple subsystems 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 [2024-12-05 11:08:40.094432] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:25:15.618 [2024-12-05 11:08:40.094477] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:25:15.618 [2024-12-05 11:08:40.094491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:15.618 2024/12/05 11:08:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:15.618 request: 00:25:15.618 { 00:25:15.618 "method": "nvmf_subsystem_add_ns", 00:25:15.618 "params": { 00:25:15.618 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:25:15.618 "namespace": { 00:25:15.618 "bdev_name": "Malloc0", 00:25:15.618 "no_auto_visible": false, 00:25:15.618 "hide_metadata": false 00:25:15.618 } 00:25:15.618 } 00:25:15.618 } 00:25:15.618 Got JSON-RPC error response 00:25:15.618 GoRPCClient: error on JSON-RPC call 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:25:15.618 Adding namespace failed - expected result. 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:25:15.618 test case2: host connect to nvmf target in multiple paths 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 [2024-12-05 11:08:40.106675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.618 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:15.877 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:25:15.877 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:25:15.877 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:25:15.877 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 
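Test case1 above hinges on the target claiming Malloc0 (type exclusive_write) when it is added to cnode1, so the attempt to add the same bdev to cnode2 must come back as the Invalid parameters JSON-RPC error shown; case2 then adds a second listener on port 4421 and connects the host over both paths. A condensed sketch of the same RPC sequence, assuming direct scripts/rpc.py invocations in place of the log's rpc_cmd wrapper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # Expected to fail: Malloc0 is already claimed by cnode1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'failed as expected'
    # case2: second path for cnode1, then the host connects on 4420 and 4421.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421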
00:25:15.877 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:15.877 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:25:18.404 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:18.404 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:18.404 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:18.404 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:18.404 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.404 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:25:18.404 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:18.404 [global] 00:25:18.404 thread=1 00:25:18.404 invalidate=1 00:25:18.404 rw=write 00:25:18.404 time_based=1 00:25:18.404 runtime=1 00:25:18.404 ioengine=libaio 00:25:18.404 direct=1 00:25:18.404 bs=4096 00:25:18.404 iodepth=1 00:25:18.404 norandommap=0 00:25:18.404 numjobs=1 00:25:18.404 00:25:18.404 verify_dump=1 00:25:18.404 verify_backlog=512 00:25:18.404 verify_state_save=0 00:25:18.404 do_verify=1 00:25:18.404 verify=crc32c-intel 00:25:18.404 [job0] 00:25:18.404 filename=/dev/nvme0n1 00:25:18.404 Could not set queue depth (nvme0n1) 00:25:18.404 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:18.404 fio-3.35 00:25:18.404 Starting 1 thread 00:25:19.335 00:25:19.336 job0: (groupid=0, jobs=1): err= 0: pid=69803: Thu Dec 5 11:08:43 2024 00:25:19.336 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:25:19.336 slat (usec): min=9, max=136, avg=13.51, stdev= 4.56 00:25:19.336 clat (usec): min=110, max=839, avg=136.08, stdev=22.80 00:25:19.336 lat (usec): min=120, max=853, avg=149.58, stdev=23.43 00:25:19.336 clat percentiles (usec): 00:25:19.336 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 124], 00:25:19.336 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:25:19.336 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 159], 95.00th=[ 172], 00:25:19.336 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 229], 99.95th=[ 734], 00:25:19.336 | 99.99th=[ 840] 00:25:19.336 write: IOPS=3992, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec); 0 zone resets 00:25:19.336 slat (nsec): min=14362, max=99467, avg=18210.54, stdev=4063.52 00:25:19.336 clat (usec): min=76, max=296, avg=95.63, stdev=12.55 00:25:19.336 lat (usec): min=91, max=333, avg=113.84, stdev=13.35 00:25:19.336 clat percentiles (usec): 00:25:19.336 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:25:19.336 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:25:19.336 | 70.00th=[ 98], 80.00th=[ 102], 90.00th=[ 112], 95.00th=[ 121], 00:25:19.336 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 204], 99.95th=[ 235], 00:25:19.336 | 99.99th=[ 297] 00:25:19.336 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:25:19.336 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:25:19.336 lat (usec) : 100=40.37%, 250=59.58%, 500=0.03%, 750=0.01%, 1000=0.01% 00:25:19.336 cpu : usr=2.50%, sys=9.00%, ctx=7581, 
majf=0, minf=5 00:25:19.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:19.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.336 issued rwts: total=3584,3996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:19.336 00:25:19.336 Run status group 0 (all jobs): 00:25:19.336 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:25:19.336 WRITE: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=15.6MiB (16.4MB), run=1001-1001msec 00:25:19.336 00:25:19.336 Disk stats (read/write): 00:25:19.336 nvme0n1: ios=3251/3584, merge=0/0, ticks=462/374, in_queue=836, util=91.57% 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:19.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:19.336 rmmod nvme_tcp 00:25:19.336 rmmod nvme_fabrics 00:25:19.336 rmmod nvme_keyring 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 69707 ']' 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 69707 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69707 ']' 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69707 00:25:19.336 11:08:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.336 11:08:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69707 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.594 killing process with pid 69707 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69707' 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69707 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69707 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:19.594 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ 
-e /sys/class/net/initiator1/address ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:19.852 00:25:19.852 real 0m5.581s 00:25:19.852 user 0m16.968s 00:25:19.852 sys 0m1.831s 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:19.852 ************************************ 00:25:19.852 END TEST nvmf_nmic 00:25:19.852 ************************************ 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:25:19.852 ************************************ 00:25:19.852 START TEST nvmf_fio_target 00:25:19.852 ************************************ 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:25:19.852 * Looking for test storage... 
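Before the next test begins, note how the nmic teardown above removed its firewall rules: ipts tags every rule it installs with an '-m comment --comment SPDK_NVMF:...' marker (see the ACCEPT rule for initiator1 earlier in this run), and iptr then deletes them all in one pass by round-tripping the ruleset through iptables-save and filtering out the tagged lines. A sketch of the pair as reconstructed from the common.sh@547/@548 trace, assuming root privileges:

    # Setup side: install a rule and tag it with its own argument list.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT

    # Teardown side: drop every tagged rule in a single save/restore pass.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }
    iptr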
00:25:19.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:19.852 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:20.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.112 --rc genhtml_branch_coverage=1 00:25:20.112 --rc genhtml_function_coverage=1 00:25:20.112 --rc genhtml_legend=1 00:25:20.112 --rc geninfo_all_blocks=1 00:25:20.112 --rc geninfo_unexecuted_blocks=1 00:25:20.112 00:25:20.112 ' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:20.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.112 --rc genhtml_branch_coverage=1 00:25:20.112 --rc genhtml_function_coverage=1 00:25:20.112 --rc genhtml_legend=1 00:25:20.112 --rc geninfo_all_blocks=1 00:25:20.112 --rc geninfo_unexecuted_blocks=1 00:25:20.112 00:25:20.112 ' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:20.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.112 --rc genhtml_branch_coverage=1 00:25:20.112 --rc genhtml_function_coverage=1 00:25:20.112 --rc genhtml_legend=1 00:25:20.112 --rc geninfo_all_blocks=1 00:25:20.112 --rc geninfo_unexecuted_blocks=1 00:25:20.112 00:25:20.112 ' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:20.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.112 --rc genhtml_branch_coverage=1 00:25:20.112 --rc genhtml_function_coverage=1 00:25:20.112 --rc genhtml_legend=1 00:25:20.112 --rc geninfo_all_blocks=1 00:25:20.112 --rc geninfo_unexecuted_blocks=1 00:25:20.112 00:25:20.112 ' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:25:20.112 
11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.112 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:20.113 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' 
']' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@223 -- # create_target_ns 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.113 11:08:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:20.113 11:08:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target0 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:20.113 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- 
# local dev=target0 ns=nvmf_ns_spdk 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:20.114 10.0.0.1 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:20.114 10.0.0.2 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # 
local dev=initiator0 in_ns= 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:20.114 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:20.377 11:08:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:20.377 11:08:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target1 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772163 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:20.377 10.0.0.3 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 
in_ns=NVMF_TARGET_NS_CMD 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772164 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:20.377 10.0.0.4 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:20.377 
11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:20.377 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n 
initiator0 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:20.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:25:20.378 00:25:20.378 --- 10.0.0.1 ping statistics --- 00:25:20.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.378 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:20.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:25:20.378 00:25:20.378 --- 10.0.0.2 ping statistics --- 00:25:20.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.378 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.378 11:08:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:20.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:20.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:25:20.378 00:25:20.378 --- 10.0.0.3 ping statistics --- 00:25:20.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.378 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:20.378 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:20.378 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:20.379 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
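
Everything from create_target_ns down to these pings follows one fixed recipe per initiator/target pair: a veth pair per endpoint, the *_br peer ends enslaved to the nvmf_br bridge, the target-side device moved into the nvmf_ns_spdk namespace, and each address written both to the device and to its ifalias so it can be read back later. Below is a minimal sketch of pair 0 only, run as root, with the harness's error handling and bookkeeping stripped; the real logic lives in test/nvmf/setup.sh.

# One initiator/target pair, as traced above (pair 1 repeats this with 10.0.0.3/4).
ip netns add nvmf_ns_spdk                                 # target-side namespace
ip netns exec nvmf_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                           # host-side switch
ip link add initiator0 type veth peer name initiator0_br  # host end + bridge end
ip link add target0    type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk                    # target end into the ns
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias     # remembered for later lookup
ip link set initiator0_br master nvmf_br                  # bridge the peer ends
ip link set target0_br master nvmf_br
for d in nvmf_br initiator0 initiator0_br target0_br; do ip link set "$d" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT       # allow bridge-local traffic
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1             # initiator IP from the ns
ping -c 1 10.0.0.2                                        # target IP from the host

The four pings are the smoke test of that topology: each initiator address is reached from inside the target namespace and each target address from the host, confirming the bridge forwards in both directions before any NVMe/TCP traffic is attempted.
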
00:25:20.379 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.130 ms 00:25:20.379 00:25:20.379 --- 10.0.0.4 ping statistics --- 00:25:20.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.379 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # return 0 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:20.379 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev 
initiator1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:20.660 11:08:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:20.660 ' 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.660 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=70038 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 70038 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 70038 ']' 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.661 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.661 [2024-12-05 11:08:45.166259] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:20.661 [2024-12-05 11:08:45.166354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.919 [2024-12-05 11:08:45.324421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.919 [2024-12-05 11:08:45.421753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.919 [2024-12-05 11:08:45.421848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.919 [2024-12-05 11:08:45.421875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.919 [2024-12-05 11:08:45.421899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.919 [2024-12-05 11:08:45.421919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
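
With connectivity proven, the addresses stored in each ifalias are exported as the legacy variables (NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4), nvme-tcp is loaded, and nvmf_tgt is started inside the namespace. A rough sketch of that start-and-wait step follows; the polling loop is illustrative only and stands in for the harness's waitforlisten helper.

# Launch the target app in the namespace with the flags seen above
# (-i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0xF: 4-core mask).
spdk=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait until the app answers on its default RPC socket (/var/tmp/spdk.sock);
# rpc_get_methods is a cheap, always-available call.
until "$spdk/scripts/rpc.py" rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done

The DPDK EAL banner and the four reactor lines that follow are the expected startup output for a 0xF core mask.
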
00:25:20.919 [2024-12-05 11:08:45.423547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.919 [2024-12-05 11:08:45.423666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.919 [2024-12-05 11:08:45.423764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.919 [2024-12-05 11:08:45.423768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.177 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.177 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:25:21.177 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:21.177 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.177 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.177 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.177 11:08:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:21.434 [2024-12-05 11:08:46.055566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.691 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:21.948 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:25:21.948 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:22.206 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:25:22.206 11:08:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:22.772 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:25:22.772 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:23.030 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:25:23.030 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:25:23.289 11:08:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:23.855 11:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:25:23.855 11:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:24.113 11:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:25:24.113 11:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:24.372 11:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:25:24.372 11:08:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:25:24.629 11:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:24.888 11:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:24.888 11:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.454 11:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:25.454 11:08:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:25.711 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.999 [2024-12-05 11:08:50.491661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.000 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:25:26.257 11:08:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:25:26.514 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:26.772 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:25:26.772 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.772 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.772 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:25:26.772 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:25:26.773 11:08:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.673 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.673 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.673 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:28.673 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:25:28.673 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.673 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 
-- # return 0 00:25:28.673 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:28.673 [global] 00:25:28.673 thread=1 00:25:28.673 invalidate=1 00:25:28.673 rw=write 00:25:28.673 time_based=1 00:25:28.673 runtime=1 00:25:28.673 ioengine=libaio 00:25:28.673 direct=1 00:25:28.673 bs=4096 00:25:28.673 iodepth=1 00:25:28.673 norandommap=0 00:25:28.673 numjobs=1 00:25:28.673 00:25:28.673 verify_dump=1 00:25:28.673 verify_backlog=512 00:25:28.673 verify_state_save=0 00:25:28.673 do_verify=1 00:25:28.673 verify=crc32c-intel 00:25:28.673 [job0] 00:25:28.673 filename=/dev/nvme0n1 00:25:28.673 [job1] 00:25:28.673 filename=/dev/nvme0n2 00:25:28.673 [job2] 00:25:28.673 filename=/dev/nvme0n3 00:25:28.673 [job3] 00:25:28.673 filename=/dev/nvme0n4 00:25:28.931 Could not set queue depth (nvme0n1) 00:25:28.931 Could not set queue depth (nvme0n2) 00:25:28.931 Could not set queue depth (nvme0n3) 00:25:28.931 Could not set queue depth (nvme0n4) 00:25:28.931 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:28.931 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:28.931 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:28.931 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:28.931 fio-3.35 00:25:28.931 Starting 4 threads 00:25:30.430 00:25:30.430 job0: (groupid=0, jobs=1): err= 0: pid=70336: Thu Dec 5 11:08:54 2024 00:25:30.430 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:30.430 slat (nsec): min=8828, max=45775, avg=14721.55, stdev=4084.65 00:25:30.430 clat (usec): min=169, max=565, avg=288.11, stdev=33.30 00:25:30.430 lat (usec): min=178, max=587, avg=302.83, stdev=34.28 00:25:30.430 clat percentiles (usec): 00:25:30.430 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:25:30.430 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:25:30.430 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 371], 00:25:30.430 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 570], 00:25:30.430 | 99.99th=[ 570] 00:25:30.430 write: IOPS=2018, BW=8076KiB/s (8270kB/s)(8084KiB/1001msec); 0 zone resets 00:25:30.430 slat (usec): min=9, max=123, avg=21.99, stdev= 7.08 00:25:30.430 clat (usec): min=105, max=1202, avg=239.69, stdev=49.75 00:25:30.430 lat (usec): min=120, max=1230, avg=261.68, stdev=51.25 00:25:30.430 clat percentiles (usec): 00:25:30.430 | 1.00th=[ 133], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 204], 00:25:30.430 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 237], 00:25:30.430 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 322], 00:25:30.430 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 429], 99.95th=[ 437], 00:25:30.430 | 99.99th=[ 1205] 00:25:30.430 bw ( KiB/s): min= 8192, max= 8192, per=22.38%, avg=8192.00, stdev= 0.00, samples=1 00:25:30.430 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:30.430 lat (usec) : 250=38.60%, 500=61.34%, 750=0.03% 00:25:30.430 lat (msec) : 2=0.03% 00:25:30.430 cpu : usr=1.20%, sys=5.80%, ctx=3557, majf=0, minf=11 00:25:30.430 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.430 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.430 issued rwts: total=1536,2021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.430 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:30.430 job1: (groupid=0, jobs=1): err= 0: pid=70337: Thu Dec 5 11:08:54 2024 00:25:30.430 read: IOPS=2442, BW=9770KiB/s (10.0MB/s)(9780KiB/1001msec) 00:25:30.430 slat (nsec): min=10048, max=48755, avg=18702.15, stdev=4489.76 00:25:30.430 clat (usec): min=143, max=688, avg=194.56, stdev=30.91 00:25:30.430 lat (usec): min=156, max=707, avg=213.26, stdev=32.30 00:25:30.430 clat percentiles (usec): 00:25:30.430 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:25:30.430 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 194], 00:25:30.430 | 70.00th=[ 202], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 251], 00:25:30.430 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 416], 99.95th=[ 424], 00:25:30.430 | 99.99th=[ 693] 00:25:30.430 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:25:30.430 slat (usec): min=14, max=109, avg=26.58, stdev= 6.11 00:25:30.430 clat (usec): min=103, max=3586, avg=156.26, stdev=78.21 00:25:30.430 lat (usec): min=124, max=3627, avg=182.84, stdev=79.27 00:25:30.430 clat percentiles (usec): 00:25:30.430 | 1.00th=[ 112], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 133], 00:25:30.431 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 157], 00:25:30.431 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 200], 00:25:30.431 | 99.00th=[ 225], 99.50th=[ 318], 99.90th=[ 578], 99.95th=[ 1303], 00:25:30.431 | 99.99th=[ 3589] 00:25:30.431 bw ( KiB/s): min=12288, max=12288, per=33.56%, avg=12288.00, stdev= 0.00, samples=1 00:25:30.431 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:30.431 lat (usec) : 250=96.82%, 500=3.10%, 750=0.04% 00:25:30.431 lat (msec) : 2=0.02%, 4=0.02% 00:25:30.431 cpu : usr=2.30%, sys=9.20%, ctx=5016, majf=0, minf=9 00:25:30.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.431 issued rwts: total=2445,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:30.431 job2: (groupid=0, jobs=1): err= 0: pid=70338: Thu Dec 5 11:08:54 2024 00:25:30.431 read: IOPS=2332, BW=9331KiB/s (9555kB/s)(9340KiB/1001msec) 00:25:30.431 slat (nsec): min=10190, max=52255, avg=19064.95, stdev=4496.52 00:25:30.431 clat (usec): min=149, max=735, avg=208.05, stdev=37.22 00:25:30.431 lat (usec): min=165, max=749, avg=227.12, stdev=38.75 00:25:30.431 clat percentiles (usec): 00:25:30.431 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:25:30.431 | 30.00th=[ 184], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 212], 00:25:30.431 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 251], 95.00th=[ 277], 00:25:30.431 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 383], 99.95th=[ 396], 00:25:30.431 | 99.99th=[ 734] 00:25:30.431 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:25:30.431 slat (usec): min=15, max=108, avg=25.81, stdev= 5.11 00:25:30.431 clat (usec): min=106, max=704, avg=153.56, stdev=32.04 00:25:30.431 lat (usec): min=130, max=730, avg=179.37, stdev=33.55 00:25:30.431 clat percentiles (usec): 00:25:30.431 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 131], 00:25:30.431 | 30.00th=[ 
135], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 155], 00:25:30.431 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 198], 00:25:30.431 | 99.00th=[ 281], 99.50th=[ 302], 99.90th=[ 429], 99.95th=[ 445], 00:25:30.431 | 99.99th=[ 709] 00:25:30.431 bw ( KiB/s): min=12288, max=12288, per=33.56%, avg=12288.00, stdev= 0.00, samples=1 00:25:30.431 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:30.431 lat (usec) : 250=93.93%, 500=6.03%, 750=0.04% 00:25:30.431 cpu : usr=1.60%, sys=9.40%, ctx=4895, majf=0, minf=13 00:25:30.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.431 issued rwts: total=2335,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:30.431 job3: (groupid=0, jobs=1): err= 0: pid=70339: Thu Dec 5 11:08:54 2024 00:25:30.431 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:30.431 slat (nsec): min=8362, max=39054, avg=14657.78, stdev=4157.22 00:25:30.431 clat (usec): min=232, max=609, avg=288.17, stdev=33.43 00:25:30.431 lat (usec): min=246, max=629, avg=302.83, stdev=34.26 00:25:30.431 clat percentiles (usec): 00:25:30.431 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:25:30.431 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:25:30.431 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 371], 00:25:30.431 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 461], 99.95th=[ 611], 00:25:30.431 | 99.99th=[ 611] 00:25:30.431 write: IOPS=2018, BW=8076KiB/s (8270kB/s)(8084KiB/1001msec); 0 zone resets 00:25:30.431 slat (usec): min=8, max=109, avg=22.25, stdev= 7.22 00:25:30.431 clat (usec): min=123, max=1319, avg=239.41, stdev=51.89 00:25:30.431 lat (usec): min=139, max=1347, avg=261.67, stdev=53.25 00:25:30.431 clat percentiles (usec): 00:25:30.431 | 1.00th=[ 139], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 204], 00:25:30.431 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:25:30.431 | 70.00th=[ 258], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 322], 00:25:30.431 | 99.00th=[ 363], 99.50th=[ 408], 99.90th=[ 453], 99.95th=[ 498], 00:25:30.431 | 99.99th=[ 1319] 00:25:30.431 bw ( KiB/s): min= 8192, max= 8192, per=22.38%, avg=8192.00, stdev= 0.00, samples=1 00:25:30.431 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:30.431 lat (usec) : 250=39.19%, 500=60.75%, 750=0.03% 00:25:30.431 lat (msec) : 2=0.03% 00:25:30.431 cpu : usr=1.50%, sys=5.60%, ctx=3558, majf=0, minf=13 00:25:30.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.431 issued rwts: total=1536,2021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:30.431 00:25:30.431 Run status group 0 (all jobs): 00:25:30.431 READ: bw=30.6MiB/s (32.1MB/s), 6138KiB/s-9770KiB/s (6285kB/s-10.0MB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:25:30.431 WRITE: bw=35.8MiB/s (37.5MB/s), 8076KiB/s-9.99MiB/s (8270kB/s-10.5MB/s), io=35.8MiB (37.5MB), run=1001-1001msec 00:25:30.431 00:25:30.431 Disk stats (read/write): 00:25:30.431 nvme0n1: ios=1579/1536, merge=0/0, ticks=473/348, in_queue=821, 
util=87.17% 00:25:30.431 nvme0n2: ios=2074/2322, merge=0/0, ticks=425/393, in_queue=818, util=87.08% 00:25:30.431 nvme0n3: ios=1990/2048, merge=0/0, ticks=432/337, in_queue=769, util=88.78% 00:25:30.431 nvme0n4: ios=1528/1536, merge=0/0, ticks=439/351, in_queue=790, util=89.52% 00:25:30.431 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:25:30.431 [global] 00:25:30.431 thread=1 00:25:30.431 invalidate=1 00:25:30.431 rw=randwrite 00:25:30.431 time_based=1 00:25:30.431 runtime=1 00:25:30.431 ioengine=libaio 00:25:30.431 direct=1 00:25:30.431 bs=4096 00:25:30.431 iodepth=1 00:25:30.431 norandommap=0 00:25:30.431 numjobs=1 00:25:30.431 00:25:30.431 verify_dump=1 00:25:30.431 verify_backlog=512 00:25:30.431 verify_state_save=0 00:25:30.431 do_verify=1 00:25:30.431 verify=crc32c-intel 00:25:30.431 [job0] 00:25:30.431 filename=/dev/nvme0n1 00:25:30.431 [job1] 00:25:30.431 filename=/dev/nvme0n2 00:25:30.431 [job2] 00:25:30.431 filename=/dev/nvme0n3 00:25:30.431 [job3] 00:25:30.431 filename=/dev/nvme0n4 00:25:30.431 Could not set queue depth (nvme0n1) 00:25:30.431 Could not set queue depth (nvme0n2) 00:25:30.431 Could not set queue depth (nvme0n3) 00:25:30.431 Could not set queue depth (nvme0n4) 00:25:30.431 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:30.431 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:30.431 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:30.431 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:30.431 fio-3.35 00:25:30.431 Starting 4 threads 00:25:31.806 00:25:31.806 job0: (groupid=0, jobs=1): err= 0: pid=70394: Thu Dec 5 11:08:56 2024 00:25:31.806 read: IOPS=1521, BW=6086KiB/s (6232kB/s)(6092KiB/1001msec) 00:25:31.806 slat (nsec): min=10789, max=68220, avg=22198.90, stdev=5612.88 00:25:31.806 clat (usec): min=172, max=736, avg=332.28, stdev=40.08 00:25:31.806 lat (usec): min=191, max=779, avg=354.48, stdev=41.12 00:25:31.806 clat percentiles (usec): 00:25:31.806 | 1.00th=[ 223], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:25:31.806 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:25:31.806 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 392], 00:25:31.806 | 99.00th=[ 429], 99.50th=[ 465], 99.90th=[ 734], 99.95th=[ 734], 00:25:31.806 | 99.99th=[ 734] 00:25:31.806 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:31.806 slat (usec): min=15, max=106, avg=27.43, stdev= 6.74 00:25:31.806 clat (usec): min=123, max=1249, avg=266.84, stdev=57.92 00:25:31.806 lat (usec): min=147, max=1290, avg=294.27, stdev=57.67 00:25:31.806 clat percentiles (usec): 00:25:31.806 | 1.00th=[ 143], 5.00th=[ 212], 10.00th=[ 227], 20.00th=[ 239], 00:25:31.806 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:25:31.806 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 330], 00:25:31.806 | 99.00th=[ 375], 99.50th=[ 652], 99.90th=[ 857], 99.95th=[ 1254], 00:25:31.806 | 99.99th=[ 1254] 00:25:31.806 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:25:31.806 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:31.806 lat (usec) : 250=17.29%, 500=82.12%, 750=0.46%, 1000=0.10% 
00:25:31.806 lat (msec) : 2=0.03% 00:25:31.806 cpu : usr=2.10%, sys=5.80%, ctx=3066, majf=0, minf=7 00:25:31.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.806 issued rwts: total=1523,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:31.806 job1: (groupid=0, jobs=1): err= 0: pid=70395: Thu Dec 5 11:08:56 2024 00:25:31.806 read: IOPS=2167, BW=8671KiB/s (8879kB/s)(8680KiB/1001msec) 00:25:31.806 slat (nsec): min=9559, max=35117, avg=11531.27, stdev=2830.81 00:25:31.806 clat (usec): min=190, max=324, avg=226.74, stdev=13.52 00:25:31.806 lat (usec): min=200, max=334, avg=238.27, stdev=14.03 00:25:31.806 clat percentiles (usec): 00:25:31.806 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:25:31.806 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:25:31.806 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 251], 00:25:31.806 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 281], 00:25:31.806 | 99.99th=[ 326] 00:25:31.806 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:25:31.806 slat (usec): min=14, max=108, avg=16.79, stdev= 4.31 00:25:31.806 clat (usec): min=139, max=1799, avg=169.48, stdev=47.50 00:25:31.806 lat (usec): min=156, max=1815, avg=186.27, stdev=47.95 00:25:31.806 clat percentiles (usec): 00:25:31.806 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:25:31.806 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:25:31.806 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 192], 00:25:31.806 | 99.00th=[ 204], 99.50th=[ 219], 99.90th=[ 1004], 99.95th=[ 1090], 00:25:31.806 | 99.99th=[ 1795] 00:25:31.806 bw ( KiB/s): min=10768, max=10768, per=32.89%, avg=10768.00, stdev= 0.00, samples=1 00:25:31.806 iops : min= 2692, max= 2692, avg=2692.00, stdev= 0.00, samples=1 00:25:31.806 lat (usec) : 250=97.32%, 500=2.54%, 750=0.06%, 1000=0.02% 00:25:31.806 lat (msec) : 2=0.06% 00:25:31.806 cpu : usr=1.30%, sys=5.50%, ctx=4730, majf=0, minf=11 00:25:31.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.806 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.806 issued rwts: total=2170,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:31.806 job2: (groupid=0, jobs=1): err= 0: pid=70396: Thu Dec 5 11:08:56 2024 00:25:31.806 read: IOPS=2533, BW=9.90MiB/s (10.4MB/s)(9.91MiB/1001msec) 00:25:31.806 slat (nsec): min=15138, max=48227, avg=22968.26, stdev=2983.88 00:25:31.806 clat (usec): min=152, max=821, avg=187.99, stdev=26.25 00:25:31.806 lat (usec): min=175, max=844, avg=210.96, stdev=26.61 00:25:31.806 clat percentiles (usec): 00:25:31.806 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:25:31.806 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:25:31.806 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 229], 00:25:31.806 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 433], 99.95th=[ 502], 00:25:31.806 | 99.99th=[ 824] 00:25:31.806 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:25:31.807 slat (nsec): min=22722, 
max=84763, avg=31487.74, stdev=3921.28 00:25:31.807 clat (usec): min=111, max=364, avg=145.12, stdev=18.04 00:25:31.807 lat (usec): min=135, max=400, avg=176.61, stdev=18.86 00:25:31.807 clat percentiles (usec): 00:25:31.807 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 133], 00:25:31.807 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:25:31.807 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 178], 00:25:31.807 | 99.00th=[ 196], 99.50th=[ 219], 99.90th=[ 314], 99.95th=[ 322], 00:25:31.807 | 99.99th=[ 363] 00:25:31.807 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:25:31.807 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:31.807 lat (usec) : 250=99.35%, 500=0.61%, 750=0.02%, 1000=0.02% 00:25:31.807 cpu : usr=3.20%, sys=11.00%, ctx=5096, majf=0, minf=13 00:25:31.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.807 issued rwts: total=2536,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:31.807 job3: (groupid=0, jobs=1): err= 0: pid=70397: Thu Dec 5 11:08:56 2024 00:25:31.807 read: IOPS=1457, BW=5830KiB/s (5970kB/s)(5836KiB/1001msec) 00:25:31.807 slat (nsec): min=11137, max=60378, avg=19980.34, stdev=5064.61 00:25:31.807 clat (usec): min=196, max=2635, avg=345.32, stdev=76.06 00:25:31.807 lat (usec): min=217, max=2652, avg=365.30, stdev=75.93 00:25:31.807 clat percentiles (usec): 00:25:31.807 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:25:31.807 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 343], 00:25:31.807 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 412], 00:25:31.807 | 99.00th=[ 478], 99.50th=[ 510], 99.90th=[ 1029], 99.95th=[ 2638], 00:25:31.807 | 99.99th=[ 2638] 00:25:31.807 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:31.807 slat (usec): min=16, max=134, avg=33.57, stdev=10.12 00:25:31.807 clat (usec): min=155, max=801, avg=265.61, stdev=50.81 00:25:31.807 lat (usec): min=188, max=836, avg=299.18, stdev=50.57 00:25:31.807 clat percentiles (usec): 00:25:31.807 | 1.00th=[ 184], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 235], 00:25:31.807 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:25:31.807 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 326], 00:25:31.807 | 99.00th=[ 437], 99.50th=[ 627], 99.90th=[ 758], 99.95th=[ 799], 00:25:31.807 | 99.99th=[ 799] 00:25:31.807 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:25:31.807 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:31.807 lat (usec) : 250=18.90%, 500=80.30%, 750=0.63%, 1000=0.07% 00:25:31.807 lat (msec) : 2=0.07%, 4=0.03% 00:25:31.807 cpu : usr=1.20%, sys=6.80%, ctx=2996, majf=0, minf=15 00:25:31.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:31.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.807 issued rwts: total=1459,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:31.807 00:25:31.807 Run status group 0 (all jobs): 00:25:31.807 READ: bw=30.0MiB/s 
(31.5MB/s), 5830KiB/s-9.90MiB/s (5970kB/s-10.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:25:31.807 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:25:31.807 00:25:31.807 Disk stats (read/write): 00:25:31.807 nvme0n1: ios=1202/1536, merge=0/0, ticks=427/423, in_queue=850, util=88.18% 00:25:31.807 nvme0n2: ios=1976/2048, merge=0/0, ticks=478/364, in_queue=842, util=87.44% 00:25:31.807 nvme0n3: ios=2048/2276, merge=0/0, ticks=400/363, in_queue=763, util=88.89% 00:25:31.807 nvme0n4: ios=1089/1536, merge=0/0, ticks=370/418, in_queue=788, util=89.52% 00:25:31.807 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:25:31.807 [global] 00:25:31.807 thread=1 00:25:31.807 invalidate=1 00:25:31.807 rw=write 00:25:31.807 time_based=1 00:25:31.807 runtime=1 00:25:31.807 ioengine=libaio 00:25:31.807 direct=1 00:25:31.807 bs=4096 00:25:31.807 iodepth=128 00:25:31.807 norandommap=0 00:25:31.807 numjobs=1 00:25:31.807 00:25:31.807 verify_dump=1 00:25:31.807 verify_backlog=512 00:25:31.807 verify_state_save=0 00:25:31.807 do_verify=1 00:25:31.807 verify=crc32c-intel 00:25:31.807 [job0] 00:25:31.807 filename=/dev/nvme0n1 00:25:31.807 [job1] 00:25:31.807 filename=/dev/nvme0n2 00:25:31.807 [job2] 00:25:31.807 filename=/dev/nvme0n3 00:25:31.807 [job3] 00:25:31.807 filename=/dev/nvme0n4 00:25:31.807 Could not set queue depth (nvme0n1) 00:25:31.807 Could not set queue depth (nvme0n2) 00:25:31.807 Could not set queue depth (nvme0n3) 00:25:31.807 Could not set queue depth (nvme0n4) 00:25:31.807 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:31.807 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:31.807 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:31.807 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:31.807 fio-3.35 00:25:31.807 Starting 4 threads 00:25:33.183 00:25:33.183 job0: (groupid=0, jobs=1): err= 0: pid=70455: Thu Dec 5 11:08:57 2024 00:25:33.183 read: IOPS=2402, BW=9612KiB/s (9843kB/s)(9708KiB/1010msec) 00:25:33.183 slat (usec): min=3, max=10483, avg=207.92, stdev=870.54 00:25:33.183 clat (usec): min=6360, max=37835, avg=25363.85, stdev=3199.35 00:25:33.183 lat (usec): min=11168, max=38282, avg=25571.77, stdev=3278.94 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[11600], 5.00th=[21103], 10.00th=[21890], 20.00th=[24249], 00:25:33.183 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:25:33.183 | 70.00th=[26084], 80.00th=[26870], 90.00th=[28181], 95.00th=[30540], 00:25:33.183 | 99.00th=[33162], 99.50th=[34341], 99.90th=[35390], 99.95th=[37487], 00:25:33.183 | 99.99th=[38011] 00:25:33.183 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:25:33.183 slat (usec): min=5, max=7515, avg=186.98, stdev=726.61 00:25:33.183 clat (usec): min=14143, max=37908, avg=25596.31, stdev=2678.80 00:25:33.183 lat (usec): min=15472, max=40199, avg=25783.29, stdev=2744.97 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[17433], 5.00th=[21890], 10.00th=[22676], 20.00th=[23462], 00:25:33.183 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25822], 60.00th=[26346], 00:25:33.183 | 70.00th=[26870], 80.00th=[27657], 
90.00th=[28705], 95.00th=[30016], 00:25:33.183 | 99.00th=[32375], 99.50th=[32637], 99.90th=[33424], 99.95th=[34341], 00:25:33.183 | 99.99th=[38011] 00:25:33.183 bw ( KiB/s): min= 9408, max=11094, per=16.97%, avg=10251.00, stdev=1192.18, samples=2 00:25:33.183 iops : min= 2352, max= 2773, avg=2562.50, stdev=297.69, samples=2 00:25:33.183 lat (msec) : 10=0.02%, 20=2.97%, 50=97.01% 00:25:33.183 cpu : usr=1.98%, sys=7.23%, ctx=755, majf=0, minf=9 00:25:33.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:33.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:33.183 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:33.183 job1: (groupid=0, jobs=1): err= 0: pid=70456: Thu Dec 5 11:08:57 2024 00:25:33.183 read: IOPS=4870, BW=19.0MiB/s (19.9MB/s)(19.2MiB/1011msec) 00:25:33.183 slat (usec): min=5, max=6968, avg=103.47, stdev=519.11 00:25:33.183 clat (usec): min=6810, max=34452, avg=12958.23, stdev=4112.59 00:25:33.183 lat (usec): min=7543, max=38782, avg=13061.70, stdev=4157.05 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11207], 00:25:33.183 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:25:33.183 | 70.00th=[12518], 80.00th=[13435], 90.00th=[15139], 95.00th=[24511], 00:25:33.183 | 99.00th=[31851], 99.50th=[32900], 99.90th=[34341], 99.95th=[34341], 00:25:33.183 | 99.99th=[34341] 00:25:33.183 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec); 0 zone resets 00:25:33.183 slat (usec): min=7, max=7151, avg=88.50, stdev=366.74 00:25:33.183 clat (usec): min=7161, max=37312, avg=12508.59, stdev=3450.12 00:25:33.183 lat (usec): min=7183, max=37328, avg=12597.09, stdev=3480.84 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[ 7767], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11076], 00:25:33.183 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:25:33.183 | 70.00th=[12256], 80.00th=[12518], 90.00th=[14091], 95.00th=[16712], 00:25:33.183 | 99.00th=[29754], 99.50th=[32375], 99.90th=[33817], 99.95th=[34866], 00:25:33.183 | 99.99th=[37487] 00:25:33.183 bw ( KiB/s): min=20048, max=20912, per=33.91%, avg=20480.00, stdev=610.94, samples=2 00:25:33.183 iops : min= 5012, max= 5228, avg=5120.00, stdev=152.74, samples=2 00:25:33.183 lat (msec) : 10=8.60%, 20=86.72%, 50=4.68% 00:25:33.183 cpu : usr=3.17%, sys=13.37%, ctx=751, majf=0, minf=10 00:25:33.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:33.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:33.183 issued rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:33.183 job2: (groupid=0, jobs=1): err= 0: pid=70463: Thu Dec 5 11:08:57 2024 00:25:33.183 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:25:33.183 slat (usec): min=6, max=4753, avg=105.82, stdev=485.60 00:25:33.183 clat (usec): min=10364, max=18374, avg=13967.39, stdev=1108.78 00:25:33.183 lat (usec): min=10422, max=18383, avg=14073.21, stdev=1046.21 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[10814], 5.00th=[11731], 10.00th=[12387], 20.00th=[13304], 00:25:33.183 | 
30.00th=[13698], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:25:33.183 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:25:33.183 | 99.00th=[16712], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:25:33.183 | 99.99th=[18482] 00:25:33.183 write: IOPS=4694, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec); 0 zone resets 00:25:33.183 slat (usec): min=9, max=4819, avg=101.14, stdev=460.80 00:25:33.183 clat (usec): min=299, max=16270, avg=13197.37, stdev=1670.91 00:25:33.183 lat (usec): min=2332, max=16289, avg=13298.51, stdev=1640.51 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[ 7046], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:25:33.183 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:25:33.183 | 70.00th=[14091], 80.00th=[14484], 90.00th=[14746], 95.00th=[15139], 00:25:33.183 | 99.00th=[15664], 99.50th=[16057], 99.90th=[16319], 99.95th=[16319], 00:25:33.183 | 99.99th=[16319] 00:25:33.183 bw ( KiB/s): min=19440, max=19440, per=32.19%, avg=19440.00, stdev= 0.00, samples=1 00:25:33.183 iops : min= 4860, max= 4860, avg=4860.00, stdev= 0.00, samples=1 00:25:33.183 lat (usec) : 500=0.01% 00:25:33.183 lat (msec) : 4=0.38%, 10=0.45%, 20=99.16% 00:25:33.183 cpu : usr=4.00%, sys=11.09%, ctx=456, majf=0, minf=13 00:25:33.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:33.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:33.183 issued rwts: total=4608,4704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:33.183 job3: (groupid=0, jobs=1): err= 0: pid=70464: Thu Dec 5 11:08:57 2024 00:25:33.183 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:25:33.183 slat (usec): min=4, max=8798, avg=185.63, stdev=759.13 00:25:33.183 clat (usec): min=10576, max=34320, avg=22948.04, stdev=5291.04 00:25:33.183 lat (usec): min=10591, max=34355, avg=23133.67, stdev=5361.14 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[10683], 5.00th=[12649], 10.00th=[13304], 20.00th=[17433], 00:25:33.183 | 30.00th=[22938], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:25:33.183 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27132], 95.00th=[29492], 00:25:33.183 | 99.00th=[31589], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:25:33.183 | 99.99th=[34341] 00:25:33.183 write: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1003msec); 0 zone resets 00:25:33.183 slat (usec): min=5, max=7996, avg=175.31, stdev=687.73 00:25:33.183 clat (usec): min=321, max=35152, avg=23486.02, stdev=5636.62 00:25:33.183 lat (usec): min=3610, max=35183, avg=23661.33, stdev=5697.29 00:25:33.183 clat percentiles (usec): 00:25:33.183 | 1.00th=[ 4146], 5.00th=[12125], 10.00th=[12780], 20.00th=[20841], 00:25:33.183 | 30.00th=[23200], 40.00th=[24249], 50.00th=[24511], 60.00th=[25822], 00:25:33.183 | 70.00th=[26608], 80.00th=[27132], 90.00th=[28181], 95.00th=[30540], 00:25:33.183 | 99.00th=[32637], 99.50th=[33162], 99.90th=[34866], 99.95th=[34866], 00:25:33.183 | 99.99th=[35390] 00:25:33.183 bw ( KiB/s): min=10536, max=11488, per=18.23%, avg=11012.00, stdev=673.17, samples=2 00:25:33.183 iops : min= 2634, max= 2872, avg=2753.00, stdev=168.29, samples=2 00:25:33.183 lat (usec) : 500=0.02% 00:25:33.183 lat (msec) : 4=0.37%, 10=1.18%, 20=18.75%, 50=79.69% 00:25:33.183 cpu : usr=2.89%, sys=6.59%, ctx=973, majf=0, minf=15 00:25:33.183 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:33.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:33.183 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:33.183 00:25:33.183 Run status group 0 (all jobs): 00:25:33.183 READ: bw=56.1MiB/s (58.8MB/s), 9612KiB/s-19.0MiB/s (9843kB/s-19.9MB/s), io=56.7MiB (59.5MB), run=1002-1011msec 00:25:33.183 WRITE: bw=59.0MiB/s (61.8MB/s), 9.90MiB/s-19.8MiB/s (10.4MB/s-20.7MB/s), io=59.6MiB (62.5MB), run=1002-1011msec 00:25:33.183 00:25:33.183 Disk stats (read/write): 00:25:33.183 nvme0n1: ios=2098/2182, merge=0/0, ticks=16485/16927, in_queue=33412, util=87.26% 00:25:33.183 nvme0n2: ios=4443/4608, merge=0/0, ticks=25912/24412, in_queue=50324, util=87.53% 00:25:33.183 nvme0n3: ios=3776/4096, merge=0/0, ticks=12606/12124, in_queue=24730, util=88.96% 00:25:33.183 nvme0n4: ios=2048/2208, merge=0/0, ticks=16882/16743, in_queue=33625, util=89.72% 00:25:33.183 11:08:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:25:33.183 [global] 00:25:33.183 thread=1 00:25:33.183 invalidate=1 00:25:33.183 rw=randwrite 00:25:33.183 time_based=1 00:25:33.183 runtime=1 00:25:33.183 ioengine=libaio 00:25:33.183 direct=1 00:25:33.183 bs=4096 00:25:33.183 iodepth=128 00:25:33.183 norandommap=0 00:25:33.183 numjobs=1 00:25:33.183 00:25:33.183 verify_dump=1 00:25:33.183 verify_backlog=512 00:25:33.183 verify_state_save=0 00:25:33.183 do_verify=1 00:25:33.183 verify=crc32c-intel 00:25:33.183 [job0] 00:25:33.183 filename=/dev/nvme0n1 00:25:33.183 [job1] 00:25:33.184 filename=/dev/nvme0n2 00:25:33.184 [job2] 00:25:33.184 filename=/dev/nvme0n3 00:25:33.184 [job3] 00:25:33.184 filename=/dev/nvme0n4 00:25:33.184 Could not set queue depth (nvme0n1) 00:25:33.184 Could not set queue depth (nvme0n2) 00:25:33.184 Could not set queue depth (nvme0n3) 00:25:33.184 Could not set queue depth (nvme0n4) 00:25:33.184 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:33.184 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:33.184 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:33.184 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:33.184 fio-3.35 00:25:33.184 Starting 4 threads 00:25:34.559 00:25:34.559 job0: (groupid=0, jobs=1): err= 0: pid=70517: Thu Dec 5 11:08:58 2024 00:25:34.559 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:25:34.559 slat (usec): min=4, max=36703, avg=261.27, stdev=1814.11 00:25:34.559 clat (usec): min=9834, max=87554, avg=32408.04, stdev=13340.71 00:25:34.559 lat (usec): min=9848, max=91806, avg=32669.31, stdev=13448.36 00:25:34.559 clat percentiles (usec): 00:25:34.559 | 1.00th=[15795], 5.00th=[18220], 10.00th=[19792], 20.00th=[24511], 00:25:34.559 | 30.00th=[25560], 40.00th=[28967], 50.00th=[29754], 60.00th=[31589], 00:25:34.559 | 70.00th=[32637], 80.00th=[36963], 90.00th=[41681], 95.00th=[72877], 00:25:34.559 | 99.00th=[80217], 99.50th=[80217], 99.90th=[83362], 99.95th=[83362], 00:25:34.559 | 99.99th=[87557] 00:25:34.559 write: IOPS=2367, BW=9469KiB/s 
(9696kB/s)(9516KiB/1005msec); 0 zone resets 00:25:34.559 slat (usec): min=5, max=21569, avg=186.66, stdev=960.77 00:25:34.559 clat (usec): min=3808, max=48669, avg=25614.00, stdev=5268.00 00:25:34.559 lat (usec): min=3842, max=48702, avg=25800.66, stdev=5344.90 00:25:34.559 clat percentiles (usec): 00:25:34.559 | 1.00th=[ 9503], 5.00th=[15401], 10.00th=[19268], 20.00th=[21890], 00:25:34.559 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26608], 60.00th=[27132], 00:25:34.559 | 70.00th=[28181], 80.00th=[28967], 90.00th=[31065], 95.00th=[32113], 00:25:34.559 | 99.00th=[37487], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497], 00:25:34.559 | 99.99th=[48497] 00:25:34.559 bw ( KiB/s): min= 7600, max=10374, per=15.94%, avg=8987.00, stdev=1961.51, samples=2 00:25:34.559 iops : min= 1900, max= 2593, avg=2246.50, stdev=490.02, samples=2 00:25:34.559 lat (msec) : 4=0.16%, 10=0.88%, 20=11.86%, 50=84.21%, 100=2.89% 00:25:34.559 cpu : usr=2.29%, sys=6.18%, ctx=337, majf=0, minf=5 00:25:34.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:34.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:34.559 issued rwts: total=2048,2379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:34.559 job1: (groupid=0, jobs=1): err= 0: pid=70518: Thu Dec 5 11:08:58 2024 00:25:34.559 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:25:34.559 slat (usec): min=3, max=23616, avg=133.75, stdev=978.09 00:25:34.559 clat (usec): min=4727, max=48692, avg=16707.09, stdev=8176.03 00:25:34.559 lat (usec): min=4754, max=48716, avg=16840.84, stdev=8258.49 00:25:34.559 clat percentiles (usec): 00:25:34.559 | 1.00th=[ 5407], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10552], 00:25:34.559 | 30.00th=[11469], 40.00th=[12125], 50.00th=[13304], 60.00th=[14877], 00:25:34.559 | 70.00th=[18482], 80.00th=[23462], 90.00th=[31065], 95.00th=[34341], 00:25:34.559 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43779], 99.95th=[45351], 00:25:34.559 | 99.99th=[48497] 00:25:34.559 write: IOPS=4081, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:25:34.559 slat (usec): min=4, max=17340, avg=102.22, stdev=576.70 00:25:34.559 clat (usec): min=3858, max=82334, avg=14391.78, stdev=11268.43 00:25:34.559 lat (usec): min=3869, max=82353, avg=14494.00, stdev=11344.68 00:25:34.559 clat percentiles (usec): 00:25:34.559 | 1.00th=[ 4555], 5.00th=[ 5800], 10.00th=[ 7373], 20.00th=[10159], 00:25:34.559 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:25:34.560 | 70.00th=[12387], 80.00th=[12649], 90.00th=[20579], 95.00th=[33424], 00:25:34.560 | 99.00th=[71828], 99.50th=[73925], 99.90th=[80217], 99.95th=[82314], 00:25:34.560 | 99.99th=[82314] 00:25:34.560 bw ( KiB/s): min=10160, max=22517, per=28.98%, avg=16338.50, stdev=8737.72, samples=2 00:25:34.560 iops : min= 2540, max= 5629, avg=4084.50, stdev=2184.25, samples=2 00:25:34.560 lat (msec) : 4=0.12%, 10=17.48%, 20=63.33%, 50=17.50%, 100=1.56% 00:25:34.560 cpu : usr=3.19%, sys=9.36%, ctx=664, majf=0, minf=13 00:25:34.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:34.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:34.560 issued rwts: total=4096,4102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.560 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:25:34.560 job2: (groupid=0, jobs=1): err= 0: pid=70519: Thu Dec 5 11:08:58 2024 00:25:34.560 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:25:34.560 slat (usec): min=5, max=6244, avg=95.89, stdev=490.70 00:25:34.560 clat (usec): min=6709, max=20020, avg=12361.54, stdev=1953.97 00:25:34.560 lat (usec): min=6725, max=20051, avg=12457.43, stdev=1997.30 00:25:34.560 clat percentiles (usec): 00:25:34.560 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:25:34.560 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780], 00:25:34.560 | 70.00th=[13173], 80.00th=[13698], 90.00th=[15008], 95.00th=[16057], 00:25:34.560 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:25:34.560 | 99.99th=[20055] 00:25:34.560 write: IOPS=5255, BW=20.5MiB/s (21.5MB/s)(20.7MiB/1006msec); 0 zone resets 00:25:34.560 slat (usec): min=8, max=7108, avg=88.27, stdev=397.42 00:25:34.560 clat (usec): min=4700, max=19964, avg=12073.57, stdev=2226.24 00:25:34.560 lat (usec): min=5218, max=20003, avg=12161.84, stdev=2257.81 00:25:34.560 clat percentiles (usec): 00:25:34.560 | 1.00th=[ 7177], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:25:34.560 | 30.00th=[10552], 40.00th=[11207], 50.00th=[12649], 60.00th=[13042], 00:25:34.560 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14353], 95.00th=[16188], 00:25:34.560 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:25:34.560 | 99.99th=[20055] 00:25:34.560 bw ( KiB/s): min=20439, max=20800, per=36.57%, avg=20619.50, stdev=255.27, samples=2 00:25:34.560 iops : min= 5109, max= 5200, avg=5154.50, stdev=64.35, samples=2 00:25:34.560 lat (msec) : 10=14.35%, 20=85.63%, 50=0.02% 00:25:34.560 cpu : usr=4.28%, sys=13.03%, ctx=593, majf=0, minf=11 00:25:34.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:34.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:34.560 issued rwts: total=5120,5287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:34.560 job3: (groupid=0, jobs=1): err= 0: pid=70520: Thu Dec 5 11:08:58 2024 00:25:34.560 read: IOPS=2015, BW=8063KiB/s (8257kB/s)(8192KiB/1016msec) 00:25:34.560 slat (usec): min=3, max=20498, avg=213.22, stdev=1359.41 00:25:34.560 clat (usec): min=7240, max=49576, avg=25724.12, stdev=7996.88 00:25:34.560 lat (usec): min=7250, max=49598, avg=25937.34, stdev=8080.67 00:25:34.560 clat percentiles (usec): 00:25:34.560 | 1.00th=[10814], 5.00th=[12911], 10.00th=[13173], 20.00th=[17957], 00:25:34.560 | 30.00th=[22676], 40.00th=[24773], 50.00th=[26084], 60.00th=[27657], 00:25:34.560 | 70.00th=[29230], 80.00th=[30278], 90.00th=[35914], 95.00th=[41157], 00:25:34.560 | 99.00th=[46400], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:25:34.560 | 99.99th=[49546] 00:25:34.560 write: IOPS=2513, BW=9.82MiB/s (10.3MB/s)(9.98MiB/1016msec); 0 zone resets 00:25:34.560 slat (usec): min=5, max=25016, avg=211.28, stdev=1146.02 00:25:34.560 clat (usec): min=5017, max=59925, avg=29790.15, stdev=10154.66 00:25:34.560 lat (usec): min=5048, max=59941, avg=30001.43, stdev=10235.06 00:25:34.560 clat percentiles (usec): 00:25:34.560 | 1.00th=[ 7046], 5.00th=[12780], 10.00th=[21627], 20.00th=[24773], 00:25:34.560 | 30.00th=[26346], 40.00th=[26870], 50.00th=[27657], 60.00th=[28967], 00:25:34.560 | 70.00th=[30802], 
80.00th=[34341], 90.00th=[44303], 95.00th=[53740], 00:25:34.560 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:25:34.560 | 99.99th=[60031] 00:25:34.560 bw ( KiB/s): min= 9008, max=10387, per=17.20%, avg=9697.50, stdev=975.10, samples=2 00:25:34.560 iops : min= 2252, max= 2596, avg=2424.00, stdev=243.24, samples=2 00:25:34.560 lat (msec) : 10=1.91%, 20=15.12%, 50=78.97%, 100=4.00% 00:25:34.560 cpu : usr=2.27%, sys=6.70%, ctx=305, majf=0, minf=14 00:25:34.560 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:34.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:34.560 issued rwts: total=2048,2554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:34.560 00:25:34.560 Run status group 0 (all jobs): 00:25:34.560 READ: bw=51.2MiB/s (53.7MB/s), 8063KiB/s-19.9MiB/s (8257kB/s-20.8MB/s), io=52.0MiB (54.5MB), run=1005-1016msec 00:25:34.560 WRITE: bw=55.1MiB/s (57.7MB/s), 9469KiB/s-20.5MiB/s (9696kB/s-21.5MB/s), io=55.9MiB (58.7MB), run=1005-1016msec 00:25:34.560 00:25:34.560 Disk stats (read/write): 00:25:34.560 nvme0n1: ios=1838/2048, merge=0/0, ticks=45664/43749, in_queue=89413, util=87.36% 00:25:34.560 nvme0n2: ios=3633/4015, merge=0/0, ticks=43272/43672, in_queue=86944, util=86.54% 00:25:34.560 nvme0n3: ios=4096/4529, merge=0/0, ticks=25189/24682, in_queue=49871, util=88.30% 00:25:34.560 nvme0n4: ios=1788/2048, merge=0/0, ticks=46974/57502, in_queue=104476, util=89.62% 00:25:34.560 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:25:34.560 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70533 00:25:34.560 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:25:34.560 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:25:34.560 [global] 00:25:34.560 thread=1 00:25:34.560 invalidate=1 00:25:34.560 rw=read 00:25:34.560 time_based=1 00:25:34.560 runtime=10 00:25:34.560 ioengine=libaio 00:25:34.560 direct=1 00:25:34.560 bs=4096 00:25:34.560 iodepth=1 00:25:34.560 norandommap=1 00:25:34.560 numjobs=1 00:25:34.560 00:25:34.560 [job0] 00:25:34.560 filename=/dev/nvme0n1 00:25:34.560 [job1] 00:25:34.560 filename=/dev/nvme0n2 00:25:34.560 [job2] 00:25:34.560 filename=/dev/nvme0n3 00:25:34.560 [job3] 00:25:34.561 filename=/dev/nvme0n4 00:25:34.561 Could not set queue depth (nvme0n1) 00:25:34.561 Could not set queue depth (nvme0n2) 00:25:34.561 Could not set queue depth (nvme0n3) 00:25:34.561 Could not set queue depth (nvme0n4) 00:25:34.561 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:34.561 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:34.561 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:34.561 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:34.561 fio-3.35 00:25:34.561 Starting 4 threads 00:25:37.844 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:25:37.844 fio: io_u error on file /dev/nvme0n4: Operation not supported: 
read offset=40120320, buflen=4096 00:25:37.844 fio: pid=70576, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:38.102 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:25:38.361 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=58257408, buflen=4096 00:25:38.361 fio: pid=70575, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:38.361 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:38.361 11:09:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:38.928 fio: pid=70573, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:38.928 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52588544, buflen=4096 00:25:38.928 11:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:38.928 11:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:39.186 fio: pid=70574, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:39.186 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=25382912, buflen=4096 00:25:39.443 00:25:39.443 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70573: Thu Dec 5 11:09:03 2024 00:25:39.443 read: IOPS=3172, BW=12.4MiB/s (13.0MB/s)(50.2MiB/4047msec) 00:25:39.443 slat (usec): min=7, max=10498, avg=18.10, stdev=152.29 00:25:39.443 clat (usec): min=5, max=7665, avg=295.60, stdev=135.23 00:25:39.443 lat (usec): min=125, max=11141, avg=313.71, stdev=204.23 00:25:39.443 clat percentiles (usec): 00:25:39.444 | 1.00th=[ 141], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 233], 00:25:39.444 | 30.00th=[ 262], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:25:39.444 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 429], 00:25:39.444 | 99.00th=[ 586], 99.50th=[ 668], 99.90th=[ 1352], 99.95th=[ 3032], 00:25:39.444 | 99.99th=[ 3982] 00:25:39.444 bw ( KiB/s): min=10432, max=15528, per=23.57%, avg=12426.86, stdev=1708.74, samples=7 00:25:39.444 iops : min= 2608, max= 3882, avg=3106.71, stdev=427.18, samples=7 00:25:39.444 lat (usec) : 10=0.01%, 100=0.01%, 250=25.19%, 500=72.20%, 750=2.23% 00:25:39.444 lat (usec) : 1000=0.19% 00:25:39.444 lat (msec) : 2=0.11%, 4=0.05%, 10=0.01% 00:25:39.444 cpu : usr=0.96%, sys=4.75%, ctx=12883, majf=0, minf=1 00:25:39.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 issued rwts: total=12840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.444 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70574: Thu Dec 5 11:09:03 2024 00:25:39.444 read: IOPS=5008, BW=19.6MiB/s (20.5MB/s)(88.2MiB/4509msec) 00:25:39.444 slat (usec): min=8, max=11942, avg=18.51, stdev=146.35 00:25:39.444 clat (usec): min=3, max=25192, 
avg=179.69, stdev=177.34 00:25:39.444 lat (usec): min=127, max=25202, avg=198.20, stdev=231.31 00:25:39.444 clat percentiles (usec): 00:25:39.444 | 1.00th=[ 127], 5.00th=[ 141], 10.00th=[ 149], 20.00th=[ 155], 00:25:39.444 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:25:39.444 | 70.00th=[ 186], 80.00th=[ 198], 90.00th=[ 219], 95.00th=[ 235], 00:25:39.444 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 594], 99.95th=[ 881], 00:25:39.444 | 99.99th=[ 3949] 00:25:39.444 bw ( KiB/s): min=13784, max=22120, per=37.61%, avg=19830.75, stdev=2652.72, samples=8 00:25:39.444 iops : min= 3446, max= 5530, avg=4957.62, stdev=663.19, samples=8 00:25:39.444 lat (usec) : 4=0.01%, 10=0.04%, 20=0.01%, 50=0.01%, 100=0.02% 00:25:39.444 lat (usec) : 250=97.26%, 500=2.49%, 750=0.11%, 1000=0.02% 00:25:39.444 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01%, 50=0.01% 00:25:39.444 cpu : usr=1.57%, sys=7.03%, ctx=22632, majf=0, minf=2 00:25:39.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 issued rwts: total=22582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.444 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70575: Thu Dec 5 11:09:03 2024 00:25:39.444 read: IOPS=3944, BW=15.4MiB/s (16.2MB/s)(55.6MiB/3606msec) 00:25:39.444 slat (usec): min=7, max=17856, avg=16.35, stdev=170.28 00:25:39.444 clat (usec): min=95, max=4310, avg=235.60, stdev=79.25 00:25:39.444 lat (usec): min=159, max=18198, avg=251.94, stdev=189.24 00:25:39.444 clat percentiles (usec): 00:25:39.444 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 202], 00:25:39.444 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:25:39.444 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 322], 00:25:39.444 | 99.00th=[ 478], 99.50th=[ 537], 99.90th=[ 955], 99.95th=[ 1385], 00:25:39.444 | 99.99th=[ 2868] 00:25:39.444 bw ( KiB/s): min=15528, max=17864, per=31.84%, avg=16788.00, stdev=930.37, samples=6 00:25:39.444 iops : min= 3882, max= 4466, avg=4197.00, stdev=232.59, samples=6 00:25:39.444 lat (usec) : 100=0.01%, 250=78.06%, 500=21.20%, 750=0.59%, 1000=0.04% 00:25:39.444 lat (msec) : 2=0.07%, 4=0.02%, 10=0.01% 00:25:39.444 cpu : usr=1.05%, sys=5.44%, ctx=14229, majf=0, minf=2 00:25:39.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 issued rwts: total=14224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.444 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70576: Thu Dec 5 11:09:03 2024 00:25:39.444 read: IOPS=3121, BW=12.2MiB/s (12.8MB/s)(38.3MiB/3138msec) 00:25:39.444 slat (usec): min=6, max=108, avg=14.74, stdev= 5.16 00:25:39.444 clat (usec): min=148, max=3553, avg=303.93, stdev=91.72 00:25:39.444 lat (usec): min=159, max=3569, avg=318.67, stdev=92.78 00:25:39.444 clat percentiles (usec): 00:25:39.444 | 1.00th=[ 163], 5.00th=[ 184], 10.00th=[ 233], 20.00th=[ 260], 00:25:39.444 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 306], 
00:25:39.444 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 412], 00:25:39.444 | 99.00th=[ 553], 99.50th=[ 619], 99.90th=[ 947], 99.95th=[ 1598], 00:25:39.444 | 99.99th=[ 3556] 00:25:39.444 bw ( KiB/s): min=10424, max=15408, per=23.62%, avg=12456.00, stdev=1827.93, samples=6 00:25:39.444 iops : min= 2606, max= 3852, avg=3114.00, stdev=456.98, samples=6 00:25:39.444 lat (usec) : 250=14.18%, 500=83.45%, 750=2.09%, 1000=0.17% 00:25:39.444 lat (msec) : 2=0.07%, 4=0.02% 00:25:39.444 cpu : usr=1.02%, sys=4.30%, ctx=9801, majf=0, minf=1 00:25:39.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.444 issued rwts: total=9796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.444 00:25:39.444 Run status group 0 (all jobs): 00:25:39.444 READ: bw=51.5MiB/s (54.0MB/s), 12.2MiB/s-19.6MiB/s (12.8MB/s-20.5MB/s), io=232MiB (243MB), run=3138-4509msec 00:25:39.444 00:25:39.444 Disk stats (read/write): 00:25:39.444 nvme0n1: ios=12108/0, merge=0/0, ticks=3590/0, in_queue=3590, util=95.62% 00:25:39.444 nvme0n2: ios=21680/0, merge=0/0, ticks=3933/0, in_queue=3933, util=95.99% 00:25:39.444 nvme0n3: ios=13651/0, merge=0/0, ticks=3143/0, in_queue=3143, util=96.25% 00:25:39.444 nvme0n4: ios=9715/0, merge=0/0, ticks=2909/0, in_queue=2909, util=97.01% 00:25:39.444 11:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:39.444 11:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:25:39.703 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:39.703 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:25:40.267 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:40.267 11:09:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:25:40.619 11:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:40.619 11:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:25:41.210 11:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:41.210 11:09:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70533 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:25:41.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:41.467 nvmf hotplug test: fio failed as expected 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:25:41.467 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.031 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:25:42.031 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:25:42.031 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:25:42.031 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:25:42.031 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:25:42.031 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:42.032 rmmod nvme_tcp 00:25:42.032 rmmod nvme_fabrics 00:25:42.032 rmmod nvme_keyring 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 70038 ']' 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 70038 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 70038 ']' 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 70038 00:25:42.032 
11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70038 00:25:42.032 killing process with pid 70038 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70038' 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 70038 00:25:42.032 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 70038 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:42.289 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 
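
The teardown traced above reduces to a handful of ip/iptables operations: drop the target namespace, delete the main bridge, delete whichever veth endpoints still exist, and restore only the iptables rules that do not carry the SPDK_NVMF comment tag. A minimal shell sketch, assuming the simplified device names and helper shapes shown in the trace (not the verbatim nvmf/setup.sh):

delete_dev() {
    local dev=$1
    # veth endpoints vanish in pairs, so only delete ones that still exist
    [[ -e /sys/class/net/$dev/address ]] && ip link delete "$dev"
}

nvmf_fini_sketch() {
    # removing the namespace also removes target0/target1, whose veth ends live inside it
    ip netns delete nvmf_ns_spdk 2> /dev/null
    delete_dev nvmf_br
    for dev in initiator0 initiator1 target0 target1; do
        delete_dev "$dev"    # target0/target1 are already gone, the existence check skips them
    done
    # keep everything except the rules tagged with the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

Deleting the namespace first is why the dev_map loop in the trace hits "continue" for target0 and target1: their /sys/class/net entries disappeared together with nvmf_ns_spdk, leaving only the initiator-side devices and the bridge to clean up.
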
00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:42.546 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:25:42.546 00:25:42.546 real 0m22.597s 00:25:42.546 user 1m26.929s 00:25:42.546 sys 0m10.822s 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.546 ************************************ 00:25:42.546 END TEST nvmf_fio_target 00:25:42.546 ************************************ 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.546 11:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:25:42.546 ************************************ 00:25:42.546 START TEST nvmf_bdevio 00:25:42.546 ************************************ 00:25:42.547 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:42.547 * Looking for test storage... 00:25:42.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:42.547 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:42.547 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:25:42.547 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.806 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:42.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.806 --rc genhtml_branch_coverage=1 00:25:42.806 --rc genhtml_function_coverage=1 00:25:42.807 --rc genhtml_legend=1 00:25:42.807 --rc geninfo_all_blocks=1 00:25:42.807 --rc geninfo_unexecuted_blocks=1 00:25:42.807 00:25:42.807 ' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:42.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.807 --rc genhtml_branch_coverage=1 00:25:42.807 --rc genhtml_function_coverage=1 00:25:42.807 --rc genhtml_legend=1 00:25:42.807 --rc geninfo_all_blocks=1 00:25:42.807 --rc geninfo_unexecuted_blocks=1 00:25:42.807 00:25:42.807 ' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:42.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.807 --rc genhtml_branch_coverage=1 00:25:42.807 --rc genhtml_function_coverage=1 00:25:42.807 --rc genhtml_legend=1 00:25:42.807 --rc geninfo_all_blocks=1 00:25:42.807 --rc geninfo_unexecuted_blocks=1 00:25:42.807 00:25:42.807 ' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:42.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.807 --rc genhtml_branch_coverage=1 00:25:42.807 --rc genhtml_function_coverage=1 00:25:42.807 --rc genhtml_legend=1 00:25:42.807 --rc geninfo_all_blocks=1 00:25:42.807 --rc geninfo_unexecuted_blocks=1 00:25:42.807 00:25:42.807 ' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.807 11:09:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:42.807 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:42.807 11:09:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:42.807 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@223 -- # create_target_ns 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:42.808 11:09:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:42.808 11:09:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target0 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 
10.0.0.1/24 dev initiator0 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:42.808 10.0.0.1 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:42.808 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:42.809 10.0.0.2 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:42.809 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:43.069 11:09:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 
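
The pair-0 setup just traced (and the pair-1 setup that follows) boils down to: create two veth pairs, move the target end into the namespace, derive 10.0.0.x addresses from the integer IP pool, enslave the _br peers to nvmf_br, and open TCP/4420 with a tagged iptables rule. A condensed shell sketch, assuming simplified helpers rather than the verbatim nvmf/setup.sh:

val_to_ip() {
    # 167772161 (0x0a000001) -> 10.0.0.1
    local val=$1
    printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 255)) $(((val >> 8) & 255)) $((val & 255))
}

setup_pair_sketch() {
    local id=$1 ip=$2 ns=nvmf_ns_spdk
    ip link add "initiator$id" type veth peer name "initiator${id}_br"
    ip link add "target$id"    type veth peer name "target${id}_br"
    ip link set "target$id" netns "$ns"    # target side lives in the namespace
    ip addr add "$(val_to_ip "$ip")/24" dev "initiator$id"
    ip netns exec "$ns" ip addr add "$(val_to_ip $((ip + 1)))/24" dev "target$id"
    for dev in "initiator$id" "initiator${id}_br" "target${id}_br"; do
        ip link set "$dev" up
    done
    ip netns exec "$ns" ip link set "target$id" up
    # the _br peers are bridged together so initiator and target can reach each other
    ip link set "initiator${id}_br" master nvmf_br
    ip link set "target${id}_br" master nvmf_br
    iptables -I INPUT 1 -i "initiator$id" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i initiator$id -p tcp --dport 4420 -j ACCEPT"
}

Under these assumptions, setup_pair_sketch 0 167772161 yields initiator0 at 10.0.0.1 and target0 at 10.0.0.2, while pair 1 starts from 167772163, i.e. 10.0.0.3/10.0.0.4: the four addresses the trace pings once both pairs are up.
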
00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:43.069 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target1 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@61 -- # add_to_ns target1 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772163 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:43.070 10.0.0.3 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772164 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:43.070 10.0.0.4 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp 
--dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:43.070 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:43.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:25:43.071 00:25:43.071 --- 10.0.0.1 ping statistics --- 00:25:43.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.071 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:43.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:43.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:25:43.071 00:25:43.071 --- 10.0.0.2 ping statistics --- 00:25:43.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.071 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:43.071 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:43.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:43.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.142 ms 00:25:43.336 00:25:43.336 --- 10.0.0.3 ping statistics --- 00:25:43.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.336 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:43.336 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:43.336 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:25:43.336 00:25:43.336 --- 10.0.0.4 ping statistics --- 00:25:43.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.336 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # return 0 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 
00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:43.336 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:43.337 
11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:43.337 ' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:43.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
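Every address resolved in the trace above funnels through the same convention: nvmf/setup.sh stores each veth pair's IP in /sys/class/net/<dev>/ifalias at creation time, and get_ip_address simply reads it back, entering the nvmf_ns_spdk namespace for target-side devices. A minimal sketch of that lookup, simplified from the trace (the real helper takes the name of a namespace-command array by nameref; here any non-empty second argument just means "run in the target namespace"):

# Simplified reconstruction of the get_ip_address helper traced above.
# Interface IPs are stashed in /sys/class/net/<dev>/ifalias at setup time;
# target-side devices live inside the nvmf_ns_spdk network namespace.
get_ip_address() {
    local dev=$1 in_ns=${2:-} ip
    if [[ -n $in_ns ]]; then
        ip=$(ip netns exec nvmf_ns_spdk cat "/sys/class/net/$dev/ifalias")
    else
        ip=$(cat "/sys/class/net/$dev/ifalias")
    fi
    [[ -n $ip ]] && echo "$ip"
}

# Mirrors the legacy env assignments in the trace:
NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)    # 10.0.0.1
NVMF_FIRST_TARGET_IP=$(get_ip_address target0 in_ns)    # 10.0.0.2
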
00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=70975 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 70975 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70975 ']' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:43.337 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:25:43.337 [2024-12-05 11:09:07.891891] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:43.337 [2024-12-05 11:09:07.892010] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.594 [2024-12-05 11:09:08.041636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.594 [2024-12-05 11:09:08.102785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.594 [2024-12-05 11:09:08.102858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.594 [2024-12-05 11:09:08.102870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.594 [2024-12-05 11:09:08.102880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.594 [2024-12-05 11:09:08.102889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
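nvmfappstart above boils down to launching nvmf_tgt inside the namespace and blocking in waitforlisten until pid 70975 answers on /var/tmp/spdk.sock. A rough sketch of that polling loop, assuming an rpc_get_methods probe is a sufficient liveness check (the real common/autotest_common.sh helper adds configurable retries and diagnostics):

# Rough sketch of waitforlisten: block until the app process is alive and
# its JSON-RPC socket answers a trivial request.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1    # app died during startup
        if "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                               # socket is up and serving
        fi
        sleep 0.1
    done
    return 1
}

waitforlisten 70975    # as invoked in the trace
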
00:25:43.594 [2024-12-05 11:09:08.104154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:43.594 [2024-12-05 11:09:08.104248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:43.594 [2024-12-05 11:09:08.104319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:43.594 [2024-12-05 11:09:08.104325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.594 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.594 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:25:43.594 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:43.594 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:43.594 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:43.852 [2024-12-05 11:09:08.268789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:43.852 Malloc0 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:43.852 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 
-- # set +x 00:25:43.853 [2024-12-05 11:09:08.338410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:43.853 { 00:25:43.853 "params": { 00:25:43.853 "name": "Nvme$subsystem", 00:25:43.853 "trtype": "$TEST_TRANSPORT", 00:25:43.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.853 "adrfam": "ipv4", 00:25:43.853 "trsvcid": "$NVMF_PORT", 00:25:43.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.853 "hdgst": ${hdgst:-false}, 00:25:43.853 "ddgst": ${ddgst:-false} 00:25:43.853 }, 00:25:43.853 "method": "bdev_nvme_attach_controller" 00:25:43.853 } 00:25:43.853 EOF 00:25:43.853 )") 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:25:43.853 11:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:43.853 "params": { 00:25:43.853 "name": "Nvme1", 00:25:43.853 "trtype": "tcp", 00:25:43.853 "traddr": "10.0.0.2", 00:25:43.853 "adrfam": "ipv4", 00:25:43.853 "trsvcid": "4420", 00:25:43.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:43.853 "hdgst": false, 00:25:43.853 "ddgst": false 00:25:43.853 }, 00:25:43.853 "method": "bdev_nvme_attach_controller" 00:25:43.853 }' 00:25:43.853 [2024-12-05 11:09:08.401628] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
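The printf above is the whole point of gen_nvmf_target_json: bdevio never speaks RPC, it is handed a ready-made bdev subsystem config on an inherited file descriptor (--json /dev/fd/62). An equivalent standalone invocation, assuming the subsystems/bdev wrapper that gen_nvmf_target_json builds around the printed fragment, and using plain process substitution in place of the script's explicit fd 62:

# Feed the attach-controller config shown in the log to bdevio without a
# temp file; <(...) supplies it as /dev/fd/NN, matching --json /dev/fd/62.
bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
"$bdevio" --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)
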
00:25:43.853 [2024-12-05 11:09:08.402485] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71011 ] 00:25:44.111 [2024-12-05 11:09:08.591192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:44.111 [2024-12-05 11:09:08.657542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.111 [2024-12-05 11:09:08.657616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.111 [2024-12-05 11:09:08.657619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.369 I/O targets: 00:25:44.369 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:44.369 00:25:44.369 00:25:44.369 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.369 http://cunit.sourceforge.net/ 00:25:44.369 00:25:44.369 00:25:44.369 Suite: bdevio tests on: Nvme1n1 00:25:44.369 Test: blockdev write read block ...passed 00:25:44.369 Test: blockdev write zeroes read block ...passed 00:25:44.369 Test: blockdev write zeroes read no split ...passed 00:25:44.369 Test: blockdev write zeroes read split ...passed 00:25:44.369 Test: blockdev write zeroes read split partial ...passed 00:25:44.369 Test: blockdev reset ...[2024-12-05 11:09:08.948488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:44.369 [2024-12-05 11:09:08.948872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf85f50 (9): Bad file descriptor 00:25:44.369 [2024-12-05 11:09:08.960922] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:25:44.369 passed 00:25:44.369 Test: blockdev write read 8 blocks ...passed 00:25:44.369 Test: blockdev write read size > 128k ...passed 00:25:44.369 Test: blockdev write read invalid size ...passed 00:25:44.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:44.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:44.369 Test: blockdev write read max offset ...passed 00:25:44.627 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:44.627 Test: blockdev writev readv 8 blocks ...passed 00:25:44.627 Test: blockdev writev readv 30 x 1block ...passed 00:25:44.627 Test: blockdev writev readv block ...passed 00:25:44.627 Test: blockdev writev readv size > 128k ...passed 00:25:44.627 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:44.627 Test: blockdev comparev and writev ...[2024-12-05 11:09:09.135010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.135068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.135088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.135100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.135422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.135436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.135453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.135464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.135821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.135836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.135853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.135864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.136414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.136440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.136457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:44.627 [2024-12-05 11:09:09.136468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.627 passed 00:25:44.627 Test: blockdev nvme passthru rw ...passed 00:25:44.627 Test: blockdev nvme passthru vendor specific ...passed 00:25:44.627 Test: blockdev nvme admin passthru ...[2024-12-05 11:09:09.221400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.627 [2024-12-05 11:09:09.221455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.221581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.627 [2024-12-05 11:09:09.221607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.221711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.627 [2024-12-05 11:09:09.221725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.627 [2024-12-05 11:09:09.221830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.627 [2024-12-05 11:09:09.221845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.627 passed 00:25:44.886 Test: blockdev copy ...passed 00:25:44.886 00:25:44.886 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.886 suites 1 1 n/a 0 0 00:25:44.886 tests 23 23 23 0 0 00:25:44.886 asserts 152 152 152 0 n/a 00:25:44.886 00:25:44.886 Elapsed time = 0.901 seconds 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:44.886 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:44.886 rmmod nvme_tcp 00:25:45.145 rmmod nvme_fabrics 00:25:45.145 rmmod nvme_keyring 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:25:45.145 
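The module teardown traced above is deliberately forgiving: NVMe/TCP connections may still be draining when the unload runs, so errexit is suspended and the rmmod is retried. Approximately, simplified from nvmf/common.sh (the exact loop and break placement around nvme-fabrics may differ):

# Approximate shape of the nvmfcleanup sequence traced above.
nvmfcleanup() {
    sync
    set +e             # rmmod can fail while connections drain
    local i
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    return 0
}
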
11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 70975 ']' 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 70975 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70975 ']' 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70975 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70975 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:25:45.145 killing process with pid 70975 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70975' 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70975 00:25:45.145 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70975 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:45.403 11:09:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:45.403 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:45.403 ************************************ 00:25:45.403 END TEST nvmf_bdevio 00:25:45.403 ************************************ 00:25:45.403 00:25:45.403 real 0m2.944s 00:25:45.403 user 0m8.893s 00:25:45.403 sys 0m1.043s 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.403 11:09:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:45.661 00:25:45.661 real 3m38.117s 00:25:45.661 user 11m9.235s 00:25:45.661 sys 1m16.244s 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:25:45.661 ************************************ 00:25:45.661 END TEST nvmf_target_core 00:25:45.661 ************************************ 
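Note the symmetry with the setup at the start of this run: every firewall rule added there carried -m comment --comment 'SPDK_NVMF:...', which is what lets the final iptr call tear them all down with one filter instead of replaying each deletion. A compact restatement of the traced pipeline:

# Drop every rule the suite tagged with an SPDK_NVMF comment by filtering
# the saved ruleset and restoring the remainder (as traced at common.sh@548).
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
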
00:25:45.661 11:09:10 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:25:45.661 11:09:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.661 11:09:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.661 11:09:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.661 ************************************ 00:25:45.661 START TEST nvmf_target_extra 00:25:45.661 ************************************ 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:25:45.661 * Looking for test storage... 00:25:45.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:45.661 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.662 --rc genhtml_branch_coverage=1 00:25:45.662 --rc genhtml_function_coverage=1 00:25:45.662 --rc genhtml_legend=1 00:25:45.662 --rc geninfo_all_blocks=1 00:25:45.662 --rc geninfo_unexecuted_blocks=1 00:25:45.662 00:25:45.662 ' 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.662 --rc genhtml_branch_coverage=1 00:25:45.662 --rc genhtml_function_coverage=1 00:25:45.662 --rc genhtml_legend=1 00:25:45.662 --rc geninfo_all_blocks=1 00:25:45.662 --rc geninfo_unexecuted_blocks=1 00:25:45.662 00:25:45.662 ' 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.662 --rc genhtml_branch_coverage=1 00:25:45.662 --rc genhtml_function_coverage=1 00:25:45.662 --rc genhtml_legend=1 00:25:45.662 --rc geninfo_all_blocks=1 00:25:45.662 --rc geninfo_unexecuted_blocks=1 00:25:45.662 00:25:45.662 ' 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:45.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.662 --rc genhtml_branch_coverage=1 00:25:45.662 --rc genhtml_function_coverage=1 00:25:45.662 --rc genhtml_legend=1 00:25:45.662 --rc geninfo_all_blocks=1 00:25:45.662 --rc geninfo_unexecuted_blocks=1 00:25:45.662 00:25:45.662 ' 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.662 11:09:10 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.662 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:45.923 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:45.924 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:45.924 ************************************ 00:25:45.924 START TEST nvmf_example 00:25:45.924 ************************************ 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:25:45.924 * Looking for test storage... 
00:25:45.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.924 --rc genhtml_branch_coverage=1 00:25:45.924 --rc genhtml_function_coverage=1 00:25:45.924 --rc genhtml_legend=1 00:25:45.924 --rc geninfo_all_blocks=1 00:25:45.924 --rc geninfo_unexecuted_blocks=1 00:25:45.924 00:25:45.924 ' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.924 --rc genhtml_branch_coverage=1 00:25:45.924 --rc genhtml_function_coverage=1 00:25:45.924 --rc genhtml_legend=1 00:25:45.924 --rc geninfo_all_blocks=1 00:25:45.924 --rc geninfo_unexecuted_blocks=1 00:25:45.924 00:25:45.924 ' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.924 --rc genhtml_branch_coverage=1 00:25:45.924 --rc genhtml_function_coverage=1 00:25:45.924 --rc genhtml_legend=1 00:25:45.924 --rc geninfo_all_blocks=1 00:25:45.924 --rc geninfo_unexecuted_blocks=1 00:25:45.924 00:25:45.924 ' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:45.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.924 --rc genhtml_branch_coverage=1 00:25:45.924 --rc genhtml_function_coverage=1 00:25:45.924 --rc genhtml_legend=1 00:25:45.924 --rc geninfo_all_blocks=1 00:25:45.924 --rc geninfo_unexecuted_blocks=1 00:25:45.924 00:25:45.924 ' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:25:45.924 11:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.924 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:45.925 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:45.925 11:09:10 
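The "[: : integer expression expected" complaint above comes from the harness evaluating '[' '' -eq 1 ']': an optional setting expands to the empty string, -eq requires integers, so test prints the error and the branch simply falls through as false. The usual guard is to default the expansion; FLAG below is an illustrative name, not a variable from common.sh:

    # '' -eq 1 raises "integer expression expected"; default empty to 0 instead.
    FLAG=""
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi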
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@223 -- # create_target_ns 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@135 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # return 0 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 
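At this point the namespace and bridge plumbing is in place: the target side gets its own netns (nvmf_ns_spdk) with lo up, and the host side gets a bridge (nvmf_br) that every veth peer will be enslaved to, plus a FORWARD rule so frames may hairpin across the bridge. The same commands the trace ran, collected in one place:

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT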
00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:45.925 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up target0 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- 
# local dev=target0_br in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:46.185 10.0.0.1 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:46.185 11:09:10 
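val_to_ip is the only arithmetic in the address plan: interfaces draw from a 32-bit pool starting at 0x0A000001 (167772161), and each value is rendered as a dotted quad, so pair 0 becomes 10.0.0.1/10.0.0.2 and pair 1 becomes 10.0.0.3/10.0.0.4. A sketch of that decomposition; the shift-and-mask body is a reconstruction, not copied from setup.sh:

    # 167772161 == 0x0A000001 -> "10.0.0.1": peel one octet per 8-bit shift.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) \
            $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) \
            $((  val        & 255 ))
    }
    val_to_ip 167772161   # 10.0.0.1
    val_to_ip 167772164   # 10.0.0.4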
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:46.185 10.0.0.2 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:46.185 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.186 
11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up target1 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772163 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # 
echo 10.0.0.3 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:46.186 10.0.0.3 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772164 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:46.186 10.0.0.4 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:46.186 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:46.445 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:46.446 
11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.446 11:09:10 
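With the second pair wired up, both interface pairs follow the same pattern: a veth whose host end is initiatorN, a second veth whose far end targetN is moved into nvmf_ns_spdk, both *_br peers enslaved to nvmf_br, and an INPUT accept for NVMe/TCP's port 4420 on the initiator. Condensed for pair 1, every command below taken from the trace:

    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk
    ip addr add 10.0.0.3/24 dev initiator1
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    ip link set initiator1_br master nvmf_br
    ip link set target1_br master nvmf_br
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT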
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:46.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:25:46.446 00:25:46.446 --- 10.0.0.1 ping statistics --- 00:25:46.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.446 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target0 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:46.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:46.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:25:46.446 00:25:46.446 --- 10.0.0.2 ping statistics --- 00:25:46.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.446 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.446 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:46.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:46.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:25:46.447 00:25:46.447 --- 10.0.0.3 ping statistics --- 00:25:46.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.447 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:46.447 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:46.447 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:25:46.447 00:25:46.447 --- 10.0.0.4 ping statistics --- 00:25:46.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.447 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # return 0 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator0 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:46.447 11:09:10 
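The ping exchange above is ping_ips walking the dev_map both ways: one packet from inside the namespace to each initiator address and one from the host to each target address, so a broken veth or missed bridge membership fails fast here rather than later in the NVMe connect. The loop in sketch form, reading the addresses back from the ifalias files the setup wrote:

    for pair in 0 1; do
        init_ip=$(cat "/sys/class/net/initiator$pair/ifalias")
        tgt_ip=$(ip netns exec nvmf_ns_spdk cat "/sys/class/net/target$pair/ifalias")
        ip netns exec nvmf_ns_spdk ping -c 1 "$init_ip"   # target ns -> host initiator
        ping -c 1 "$tgt_ip"                               # host -> namespaced target
    done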
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.447 11:09:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:46.447 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:25:46.447 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.447 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:46.447 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target0 00:25:46.447 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target0 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target1 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target1 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target1 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:46.448 ' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:25:46.448 11:09:11 
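nvmf_legacy_env then maps the dev_map back onto the variable names older tests expect. The address plan this setup leaves behind, read straight from the trace:

    initiator0 = 10.0.0.1 (host)          target0 = 10.0.0.2 (nvmf_ns_spdk)
    initiator1 = 10.0.0.3 (host)          target1 = 10.0.0.4 (nvmf_ns_spdk)
    NVMF_FIRST_INITIATOR_IP=10.0.0.1      NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_SECOND_INITIATOR_IP=10.0.0.3     NVMF_SECOND_TARGET_IP=10.0.0.4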
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71310 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71310 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71310 ']' 00:25:46.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:46.448 11:09:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:47.823 11:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:25:47.823 11:09:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:00.026 Initializing NVMe Controllers 00:26:00.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:00.026 Initialization complete. Launching workers. 
00:26:00.026 ========================================================
00:26:00.026                                                                            Latency(us)
00:26:00.026 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:26:00.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   15122.21      59.07    4232.47     696.89   23975.96
00:26:00.027 ========================================================
00:26:00.027 Total                                                                  :   15122.21      59.07    4232.47     696.89   23975.96
00:26:00.027
00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:00.027 rmmod nvme_tcp 00:26:00.027 rmmod nvme_fabrics 00:26:00.027 rmmod nvme_keyring 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # return 0 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 71310 ']' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 71310 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 71310 ']' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 71310 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71310 00:26:00.027 killing process with pid 71310 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71310' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 71310 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 71310 00:26:00.027 nvmf threads initialize successfully 00:26:00.027 bdev subsystem init successfully 00:26:00.027 created a nvmf target service 00:26:00.027 create targets's poll groups done 00:26:00.027 all subsystems of target started 00:26:00.027 nvmf target is running 00:26:00.027 all subsystems of target stopped 00:26:00.027 destroy targets's poll groups done 00:26:00.027 destroyed the nvmf target service 00:26:00.027 bdev
subsystem finish successfully 00:26:00.027 nvmf threads destroy successfully 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@254 -- # local dev 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:00.027 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # continue 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # continue 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@274 -- # iptr 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-save 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-restore 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:26:00.027 00:26:00.027 real 0m12.733s 00:26:00.027 user 0m44.186s 00:26:00.027 sys 0m2.742s 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:26:00.027 ************************************ 00:26:00.027 END TEST nvmf_example 00:26:00.027 ************************************ 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.027 ************************************ 00:26:00.027 START TEST nvmf_filesystem 00:26:00.027 ************************************ 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:26:00.027 * Looking for test storage... 
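One note on the teardown traced above before the next test's storage probe: the target network namespace is removed first (which is why the /sys/class/net/target0 and target1 address checks fail and the device loop takes `continue`), the bridge and the initiator-side links still in the root namespace are then deleted explicitly, and finally the firewall is rewritten by filtering the test's own rules out of a full dump. A minimal sketch of the same teardown, assuming this run's names (nvmf_ns_spdk, nvmf_br, initiator0/1):

  # dropping the namespace destroys every interface that lived inside it
  ip netns delete nvmf_ns_spdk
  # the bridge and initiator-side links live in the root namespace; delete those that still exist
  for dev in nvmf_br initiator0 initiator1; do
      [[ -e /sys/class/net/$dev/address ]] && ip link delete "$dev"
  done
  # keep all pre-existing iptables rules, drop only the SPDK_NVMF-tagged ones
  iptables-save | grep -v SPDK_NVMF | iptables-restore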
00:26:00.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:26:00.027 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:00.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.028 --rc genhtml_branch_coverage=1 00:26:00.028 --rc genhtml_function_coverage=1 00:26:00.028 --rc genhtml_legend=1 00:26:00.028 --rc geninfo_all_blocks=1 00:26:00.028 --rc geninfo_unexecuted_blocks=1 00:26:00.028 00:26:00.028 ' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:00.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.028 --rc genhtml_branch_coverage=1 00:26:00.028 --rc genhtml_function_coverage=1 00:26:00.028 --rc genhtml_legend=1 00:26:00.028 --rc geninfo_all_blocks=1 00:26:00.028 --rc geninfo_unexecuted_blocks=1 00:26:00.028 00:26:00.028 ' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:00.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.028 --rc genhtml_branch_coverage=1 00:26:00.028 --rc genhtml_function_coverage=1 00:26:00.028 --rc genhtml_legend=1 00:26:00.028 --rc geninfo_all_blocks=1 00:26:00.028 --rc geninfo_unexecuted_blocks=1 00:26:00.028 00:26:00.028 ' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:00.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.028 --rc genhtml_branch_coverage=1 00:26:00.028 --rc genhtml_function_coverage=1 00:26:00.028 --rc genhtml_legend=1 00:26:00.028 --rc geninfo_all_blocks=1 00:26:00.028 --rc geninfo_unexecuted_blocks=1 00:26:00.028 00:26:00.028 ' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:26:00.028 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:00.028 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:00.029 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:00.030 #define SPDK_CONFIG_H 00:26:00.030 #define SPDK_CONFIG_AIO_FSDEV 1 00:26:00.030 #define SPDK_CONFIG_APPS 1 00:26:00.030 #define SPDK_CONFIG_ARCH 
native 00:26:00.030 #undef SPDK_CONFIG_ASAN 00:26:00.030 #define SPDK_CONFIG_AVAHI 1 00:26:00.030 #undef SPDK_CONFIG_CET 00:26:00.030 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:26:00.030 #define SPDK_CONFIG_COVERAGE 1 00:26:00.030 #define SPDK_CONFIG_CROSS_PREFIX 00:26:00.030 #undef SPDK_CONFIG_CRYPTO 00:26:00.030 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:00.030 #undef SPDK_CONFIG_CUSTOMOCF 00:26:00.030 #undef SPDK_CONFIG_DAOS 00:26:00.030 #define SPDK_CONFIG_DAOS_DIR 00:26:00.030 #define SPDK_CONFIG_DEBUG 1 00:26:00.030 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:00.030 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:00.030 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:00.030 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:00.030 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:00.030 #undef SPDK_CONFIG_DPDK_UADK 00:26:00.030 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:00.030 #define SPDK_CONFIG_EXAMPLES 1 00:26:00.030 #undef SPDK_CONFIG_FC 00:26:00.030 #define SPDK_CONFIG_FC_PATH 00:26:00.030 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:00.030 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:00.030 #define SPDK_CONFIG_FSDEV 1 00:26:00.030 #undef SPDK_CONFIG_FUSE 00:26:00.030 #undef SPDK_CONFIG_FUZZER 00:26:00.030 #define SPDK_CONFIG_FUZZER_LIB 00:26:00.030 #define SPDK_CONFIG_GOLANG 1 00:26:00.030 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:26:00.030 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:26:00.030 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:00.030 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:26:00.030 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:00.030 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:00.030 #undef SPDK_CONFIG_HAVE_LZ4 00:26:00.030 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:26:00.030 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:26:00.030 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:00.030 #define SPDK_CONFIG_IDXD 1 00:26:00.030 #define SPDK_CONFIG_IDXD_KERNEL 1 00:26:00.030 #undef SPDK_CONFIG_IPSEC_MB 00:26:00.030 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:00.030 #define SPDK_CONFIG_ISAL 1 00:26:00.030 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:00.030 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:00.030 #define SPDK_CONFIG_LIBDIR 00:26:00.030 #undef SPDK_CONFIG_LTO 00:26:00.030 #define SPDK_CONFIG_MAX_LCORES 128 00:26:00.030 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:26:00.030 #define SPDK_CONFIG_NVME_CUSE 1 00:26:00.030 #undef SPDK_CONFIG_OCF 00:26:00.030 #define SPDK_CONFIG_OCF_PATH 00:26:00.030 #define SPDK_CONFIG_OPENSSL_PATH 00:26:00.030 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:00.030 #define SPDK_CONFIG_PGO_DIR 00:26:00.030 #undef SPDK_CONFIG_PGO_USE 00:26:00.030 #define SPDK_CONFIG_PREFIX /usr/local 00:26:00.030 #undef SPDK_CONFIG_RAID5F 00:26:00.030 #undef SPDK_CONFIG_RBD 00:26:00.030 #define SPDK_CONFIG_RDMA 1 00:26:00.030 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:00.030 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:00.030 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:00.030 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:00.030 #define SPDK_CONFIG_SHARED 1 00:26:00.030 #undef SPDK_CONFIG_SMA 00:26:00.030 #define SPDK_CONFIG_TESTS 1 00:26:00.030 #undef SPDK_CONFIG_TSAN 00:26:00.030 #define SPDK_CONFIG_UBLK 1 00:26:00.030 #define SPDK_CONFIG_UBSAN 1 00:26:00.030 #undef SPDK_CONFIG_UNIT_TESTS 00:26:00.030 #undef SPDK_CONFIG_URING 00:26:00.030 #define SPDK_CONFIG_URING_PATH 00:26:00.030 #undef SPDK_CONFIG_URING_ZNS 00:26:00.030 #define SPDK_CONFIG_USDT 1 00:26:00.030 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:00.030 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:00.030 
#undef SPDK_CONFIG_VFIO_USER 00:26:00.030 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:00.030 #define SPDK_CONFIG_VHOST 1 00:26:00.030 #define SPDK_CONFIG_VIRTIO 1 00:26:00.030 #undef SPDK_CONFIG_VTUNE 00:26:00.030 #define SPDK_CONFIG_VTUNE_DIR 00:26:00.030 #define SPDK_CONFIG_WERROR 1 00:26:00.030 #define SPDK_CONFIG_WPDK_DIR 00:26:00.030 #undef SPDK_CONFIG_XNVME 00:26:00.030 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:26:00.030 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:26:00.031 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:26:00.031 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:26:00.032 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:00.032 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:00.033 11:09:23 
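Note that LD_LIBRARY_PATH and PYTHONPATH above contain the same directory group several times over: each re-sourcing of the common script appends unconditionally, which is functionally harmless for search paths but noisy. A hedged sketch of the usual idempotent alternative (this is not what the script does, just the standard guard):

    # Prepend a directory to a :-separated list variable only if absent.
    prepend_path() {
        local dir=$1 var=$2
        case ":${!var}:" in
            *":$dir:"*) ;;    # already listed, do nothing
            *) printf -v "$var" '%s' "$dir${!var:+:${!var}}" ;;
        esac
    }
    prepend_path /home/vagrant/spdk_repo/spdk/build/lib LD_LIBRARY_PATH
    export LD_LIBRARY_PATH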
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71573 ]] 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71573 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:26:00.033 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.BXRXFt 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.BXRXFt/tests/target /tmp/spdk.BXRXFt 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13980798976 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5588201472 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256398336 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13980798976 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5588201472 00:26:00.034 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266290176 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=90774478848 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8928301056 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:26:00.034 * Looking for test storage... 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13980798976 00:26:00.034 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:00.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:00.035 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:00.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.035 --rc genhtml_branch_coverage=1 00:26:00.035 --rc genhtml_function_coverage=1 00:26:00.035 --rc genhtml_legend=1 00:26:00.035 --rc geninfo_all_blocks=1 00:26:00.035 --rc geninfo_unexecuted_blocks=1 00:26:00.035 00:26:00.035 ' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:00.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.035 --rc genhtml_branch_coverage=1 00:26:00.035 --rc genhtml_function_coverage=1 00:26:00.035 --rc genhtml_legend=1 00:26:00.035 --rc geninfo_all_blocks=1 00:26:00.035 --rc geninfo_unexecuted_blocks=1 00:26:00.035 00:26:00.035 ' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:00.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.035 --rc genhtml_branch_coverage=1 00:26:00.035 --rc genhtml_function_coverage=1 00:26:00.035 --rc genhtml_legend=1 00:26:00.035 --rc geninfo_all_blocks=1 00:26:00.035 --rc geninfo_unexecuted_blocks=1 00:26:00.035 00:26:00.035 ' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:00.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.035 --rc genhtml_branch_coverage=1 00:26:00.035 --rc genhtml_function_coverage=1 00:26:00.035 --rc genhtml_legend=1 00:26:00.035 --rc geninfo_all_blocks=1 00:26:00.035 --rc geninfo_unexecuted_blocks=1 00:26:00.035 00:26:00.035 ' 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
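The `lt 1.15 2` trace above is a dotted-version comparison used to check `lcov --version` against 2 before picking LCOV_OPTS: both strings are split on `.`, `-` and `:` into arrays, then compared component-wise as integers with missing components treated as 0. The same algorithm as a self-contained sketch:

    # Succeed if dotted version $1 is strictly less than $2.
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"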
# uname -s 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.035 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 
']' 00:26:00.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@223 -- # create_target_ns 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.036 11:09:23 
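The `[: : integer expression expected` message just above is a real, if benign, bug caught by the trace: line 31 of nvmf/common.sh runs a numeric test while its variable is empty, and `[` cannot compare an empty string as an integer, so the test errors out instead of evaluating cleanly. The usual hardening is to supply a numeric default before comparing; a sketch (the variable name is hypothetical):

    # An empty string breaks a numeric test:
    #   [ "$flag" -eq 1 ]   ->   [: : integer expression expected
    # Defaulting to 0 keeps the comparison well-defined either way.
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then
        echo "flag set"
    else
        echo "flag unset or 0"
    fi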
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # return 0 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:00.036 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # 
_ns=NVMF_TARGET_NS_CMD 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up target0 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:00.037 
11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:00.037 10.0.0.1 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:00.037 
10.0.0.2 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.037 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:00.038 11:09:23 
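With pair 0 complete, the topology the trace has built can be summarized: each initiator/target pair is a veth link whose target end is moved into the `nvmf_ns_spdk` namespace, each end's `_br` peer is enslaved to the `nvmf_br` bridge so the two sides can reach each other, and 10.0.0.1/24 sits on the initiator while 10.0.0.2/24 sits on the namespaced target. A condensed recipe mirroring the commands traced above (requires root; error handling and iptables rules omitted):

    # Recreate one initiator/target pair the way setup.sh does it.
    ip netns add nvmf_ns_spdk
    ip link add nvmf_br type bridge && ip link set nvmf_br up

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk

    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0

    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br   master nvmf_br && ip link set target0_br up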
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link 
set initiator1_br up 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up target1 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:00.038 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772163 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator1/ifalias 00:26:00.039 10.0.0.3 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772164 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:00.039 10.0.0.4 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:00.039 11:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:00.039 11:09:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:00.039 11:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator0 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:00.039 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:00.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
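Condensed for reference: the setup.sh helpers traced above reduce to a short sequence of iproute2 and iptables commands per initiator/target pair. A minimal hand-written sketch using the second pair's names and addresses exactly as they appear in this trace (the nvmf_br bridge and nvmf_ns_spdk namespace are created earlier in the run); this is a summary of what the trace shows, not the helpers themselves:

  # veth pairs: one end carries traffic, the _br peer gets enslaved to the bridge
  ip link add initiator1 type veth peer name initiator1_br
  ip link add target1 type veth peer name target1_br
  # the target end lives inside the test namespace
  ip link set target1 netns nvmf_ns_spdk
  # address both ends of the pair
  ip addr add 10.0.0.3/24 dev initiator1
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
  # bring everything up and attach the bridge-side peers to nvmf_br
  ip link set initiator1 up
  ip netns exec nvmf_ns_spdk ip link set target1 up
  ip link set initiator1_br master nvmf_br && ip link set initiator1_br up
  ip link set target1_br master nvmf_br && ip link set target1_br up
  # allow NVMe/TCP (port 4420) in from the initiator side
  iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT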
00:26:00.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:26:00.040 00:26:00.040 --- 10.0.0.1 ping statistics --- 00:26:00.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.040 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target0 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target0 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:00.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
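The 167772163/167772164 arguments passed to set_ip above are 32-bit integer encodings of the address pool; val_to_ip unpacks them into octets before ip addr add runs. A rough standalone equivalent of that conversion (same arithmetic, not the exact helper):

  val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(( val >> 24 )) $(( (val >> 16) & 0xff )) \
          $(( (val >> 8) & 0xff )) $(( val & 0xff ))
  }
  val_to_ip 167772163   # -> 10.0.0.3 (initiator1)
  val_to_ip 167772164   # -> 10.0.0.4 (target1)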
00:26:00.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:26:00.040 00:26:00.040 --- 10.0.0.2 ping statistics --- 00:26:00.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.040 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:00.040 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
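Note the direction of each probe in this loop: initiator addresses (10.0.0.1, 10.0.0.3) are pinged from inside the target namespace, while target addresses (10.0.0.2, 10.0.0.4) are pinged from the host, so connectivity across the bridge is verified in both directions. Stripped of the helper indirection, the four checks are:

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> host-side initiator0
  ping -c 1 10.0.0.2                              # host -> namespaced target0
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator1
  ping -c 1 10.0.0.4                              # host -> target1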
00:26:00.040 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:26:00.040 00:26:00.040 --- 10.0.0.3 ping statistics --- 00:26:00.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.040 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:00.040 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
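Each address lookup above goes through get_ip_address, which reads back the ifalias attribute that set_ip wrote when the interface was addressed, rather than parsing ip addr output. The mechanism in shorthand, using commands as they appear in the trace:

  # set_ip records the assigned address as the interface alias...
  echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias
  # ...so later lookups are a plain read; namespaced devices go through ip netns exec
  cat /sys/class/net/initiator1/ifalias
  ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias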
00:26:00.040 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:26:00.040 00:26:00.040 --- 10.0.0.4 ping statistics --- 00:26:00.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.040 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # return 0 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:00.040 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target1 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:00.041 ' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.041 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:26:00.041 
************************************ 00:26:00.041 START TEST nvmf_filesystem_no_in_capsule 00:26:00.041 ************************************ 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=71774 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 71774 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71774 ']' 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.042 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.042 [2024-12-05 11:09:24.299066] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:00.042 [2024-12-05 11:09:24.299539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.042 [2024-12-05 11:09:24.460701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:00.042 [2024-12-05 11:09:24.530931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.042 [2024-12-05 11:09:24.531023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
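The target runs inside the namespace, and everything after this point is driven over its RPC socket. A sketch of the startup plus the provisioning calls that rpc_cmd issues later in this test, with the polling loop standing in for the suite's waitforlisten helper (paths match this run's repo layout):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # stand-in for waitforlisten: poll until the RPC socket answers
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # provision the TCP transport, a malloc bdev, and a subsystem listening on target0
  "$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0
  "$rpc" bdev_malloc_create 512 512 -b Malloc1
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420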
00:26:00.042 [2024-12-05 11:09:24.531041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.042 [2024-12-05 11:09:24.531055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.042 [2024-12-05 11:09:24.531066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.042 [2024-12-05 11:09:24.532275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.042 [2024-12-05 11:09:24.532340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.042 [2024-12-05 11:09:24.532515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.042 [2024-12-05 11:09:24.532524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.607 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.607 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:26:00.607 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:00.607 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.607 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.864 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.864 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:26:00.864 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:26:00.864 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.864 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.864 [2024-12-05 11:09:25.297361] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.864 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.864 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.865 Malloc1 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.865 [2024-12-05 11:09:25.448843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.865 { 00:26:00.865 "aliases": [ 00:26:00.865 "7538690c-de57-4029-bbfb-238ae4fa090f" 00:26:00.865 ], 00:26:00.865 "assigned_rate_limits": { 00:26:00.865 "r_mbytes_per_sec": 0, 00:26:00.865 "rw_ios_per_sec": 0, 00:26:00.865 "rw_mbytes_per_sec": 0, 00:26:00.865 "w_mbytes_per_sec": 0 00:26:00.865 }, 
00:26:00.865 "block_size": 512, 00:26:00.865 "claim_type": "exclusive_write", 00:26:00.865 "claimed": true, 00:26:00.865 "driver_specific": {}, 00:26:00.865 "memory_domains": [ 00:26:00.865 { 00:26:00.865 "dma_device_id": "system", 00:26:00.865 "dma_device_type": 1 00:26:00.865 }, 00:26:00.865 { 00:26:00.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.865 "dma_device_type": 2 00:26:00.865 } 00:26:00.865 ], 00:26:00.865 "name": "Malloc1", 00:26:00.865 "num_blocks": 1048576, 00:26:00.865 "product_name": "Malloc disk", 00:26:00.865 "supported_io_types": { 00:26:00.865 "abort": true, 00:26:00.865 "compare": false, 00:26:00.865 "compare_and_write": false, 00:26:00.865 "copy": true, 00:26:00.865 "flush": true, 00:26:00.865 "get_zone_info": false, 00:26:00.865 "nvme_admin": false, 00:26:00.865 "nvme_io": false, 00:26:00.865 "nvme_io_md": false, 00:26:00.865 "nvme_iov_md": false, 00:26:00.865 "read": true, 00:26:00.865 "reset": true, 00:26:00.865 "seek_data": false, 00:26:00.865 "seek_hole": false, 00:26:00.865 "unmap": true, 00:26:00.865 "write": true, 00:26:00.865 "write_zeroes": true, 00:26:00.865 "zcopy": true, 00:26:00.865 "zone_append": false, 00:26:00.865 "zone_management": false 00:26:00.865 }, 00:26:00.865 "uuid": "7538690c-de57-4029-bbfb-238ae4fa090f", 00:26:00.865 "zoned": false 00:26:00.865 } 00:26:00.865 ]' 00:26:00.865 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:01.122 11:09:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:26:03.646 11:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:26:03.646 11:09:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.584 11:09:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:04.584 ************************************ 00:26:04.584 START TEST filesystem_ext4 00:26:04.584 ************************************ 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:26:04.584 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:26:04.584 mke2fs 1.47.0 (5-Feb-2023) 00:26:04.584 Discarding device blocks: 0/522240 done 00:26:04.584 Creating filesystem with 522240 1k blocks and 130560 inodes 00:26:04.584 Filesystem UUID: a0702c7e-7bcf-427d-afb5-1ad0112dc335 00:26:04.584 Superblock backups stored on blocks: 00:26:04.584 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:26:04.584 00:26:04.584 Allocating group tables: 0/64 done 00:26:04.584 Writing inode tables: 0/64 done 00:26:04.584 Creating journal (8192 blocks): done 00:26:04.584 Writing superblocks and filesystem accounting information: 0/64 done 00:26:04.584 00:26:04.584 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:26:04.584 11:09:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:26:11.152 11:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71774 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:26:11.152 ************************************ 00:26:11.152 END TEST filesystem_ext4 00:26:11.152 ************************************ 00:26:11.152 00:26:11.152 real 0m5.686s 00:26:11.152 user 0m0.030s 00:26:11.152 sys 0m0.062s 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:11.152 ************************************ 00:26:11.152 START TEST filesystem_btrfs 00:26:11.152 ************************************ 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:26:11.152 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:26:11.152 btrfs-progs v6.8.1 00:26:11.152 See https://btrfs.readthedocs.io for more information. 00:26:11.152 00:26:11.152 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:26:11.152 NOTE: several default settings have changed in version 5.15, please make sure 00:26:11.152 this does not affect your deployments: 00:26:11.152 - DUP for metadata (-m dup) 00:26:11.152 - enabled no-holes (-O no-holes) 00:26:11.152 - enabled free-space-tree (-R free-space-tree) 00:26:11.152 00:26:11.152 Label: (null) 00:26:11.152 UUID: c69394f7-01d4-4b51-820f-eb87884c7a5c 00:26:11.152 Node size: 16384 00:26:11.152 Sector size: 4096 (CPU page size: 4096) 00:26:11.152 Filesystem size: 510.00MiB 00:26:11.152 Block group profiles: 00:26:11.152 Data: single 8.00MiB 00:26:11.152 Metadata: DUP 32.00MiB 00:26:11.152 System: DUP 8.00MiB 00:26:11.152 SSD detected: yes 00:26:11.152 Zoned device: no 00:26:11.152 Features: extref, skinny-metadata, no-holes, free-space-tree 00:26:11.153 Checksum: crc32c 00:26:11.153 Number of devices: 1 00:26:11.153 Devices: 00:26:11.153 ID SIZE PATH 00:26:11.153 1 510.00MiB /dev/nvme0n1p1 00:26:11.153 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71774 00:26:11.153 11:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:26:11.153 ************************************ 00:26:11.153 END TEST filesystem_btrfs 00:26:11.153 ************************************ 00:26:11.153 00:26:11.153 real 0m0.273s 00:26:11.153 user 0m0.024s 00:26:11.153 sys 0m0.074s 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.153 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:11.153 ************************************ 00:26:11.153 START TEST filesystem_xfs 00:26:11.153 ************************************ 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:26:11.153 11:09:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:26:11.153 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:26:11.153 = sectsz=512 attr=2, projid32bit=1 00:26:11.153 = crc=1 finobt=1, sparse=1, rmapbt=0 00:26:11.153 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:26:11.153 data = bsize=4096 blocks=130560, imaxpct=25 00:26:11.153 = sunit=0 swidth=0 blks 00:26:11.153 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:26:11.153 log =internal log bsize=4096 blocks=16384, version=2 00:26:11.153 = sectsz=512 sunit=0 blks, lazy-count=1 00:26:11.153 realtime =none extsz=4096 blocks=0, rtextents=0 00:26:11.153 Discarding blocks...Done. 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:26:11.153 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71774 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:26:13.736 ************************************ 00:26:13.736 END TEST filesystem_xfs 00:26:13.736 ************************************ 00:26:13.736 00:26:13.736 real 0m3.177s 00:26:13.736 user 0m0.032s 00:26:13.736 sys 0m0.064s 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:26:13.736 11:09:38 
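Each of these passes goes through the make_filesystem helper; the traces at common/autotest_common.sh@930-@949 show it selecting the force flag per filesystem (mkfs.ext4 takes -F, mkfs.xfs and mkfs.btrfs take -f) before invoking mkfs. A hedged reconstruction; the retry branch past @941 is not visible in this excerpt and is assumed:

    make_filesystem() {
        local fstype=$1                   # @930: ext4 | btrfs | xfs
        local dev_name=$2                 # @931: /dev/nvme0n1p1
        local i=0                         # @932
        local force                       # @933
        if [ "$fstype" = ext4 ]; then     # @935
            force=-F                      # @936
        else
            force=-f                      # @938
        fi
        mkfs."$fstype" $force "$dev_name" && return 0   # @941 / @949
        # retry handling beyond @941 is assumed, not traced here
    }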
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:13.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71774 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71774 ']' 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71774 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71774 00:26:13.995 killing process with pid 71774 00:26:13.995 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.995 11:09:38 
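The no_in_capsule half then tears down in a fixed order: drop the test partition under an flock, sync, disconnect the initiator, poll until the SPDK serial disappears from lsblk, delete the subsystem over RPC, and kill the target. A sketch assembled from the traced lines; rpc_cmd and killprocess are harness helpers whose internals are only partly visible here, and the exact polling loop inside waitforserial_disconnect is an assumption:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # @91: remove partition 1 under a lock
    sync                                             # @93
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # @94
    i=0                                              # @1223
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1                                      # @95 waitforserial_disconnect (loop shape assumed)
    done
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @97
    trap - SIGINT SIGTERM EXIT                       # @99
    killprocess 71774                                # @101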
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.995 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71774' 00:26:13.995 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71774 00:26:13.995 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 71774 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:26:14.253 00:26:14.253 real 0m14.557s 00:26:14.253 user 0m54.617s 00:26:14.253 sys 0m2.989s 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.253 ************************************ 00:26:14.253 END TEST nvmf_filesystem_no_in_capsule 00:26:14.253 ************************************ 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:26:14.253 ************************************ 00:26:14.253 START TEST nvmf_filesystem_in_capsule 00:26:14.253 ************************************ 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.253 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=72140 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 72140 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 72140 ']' 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.254 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:14.254 [2024-12-05 11:09:38.886634] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:14.254 [2024-12-05 11:09:38.887009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.511 [2024-12-05 11:09:39.057143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:14.511 [2024-12-05 11:09:39.118163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.511 [2024-12-05 11:09:39.118225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.511 [2024-12-05 11:09:39.118238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.511 [2024-12-05 11:09:39.118248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.511 [2024-12-05 11:09:39.118257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
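For the in_capsule half a fresh target comes up: nvmfappstart runs nvmf_tgt inside the nvmf_ns_spdk network namespace with core mask 0xF and all tracepoint groups enabled, records the pid (72140 here), and waitforlisten blocks until the RPC socket answers. Sketch from the traced lines; the backgrounding, pid capture, and polling loop body are assumptions:

    # @327/@328: start the target in its netns and capture the pid
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # @329 waitforlisten / @839-@844: block until /var/tmp/spdk.sock accepts RPCs
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    # (assumed: retry a cheap RPC against $rpc_addr until it succeeds
    #  or max_retries runs out)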
00:26:14.511 [2024-12-05 11:09:39.119228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.512 [2024-12-05 11:09:39.119366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.512 [2024-12-05 11:09:39.119399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.512 [2024-12-05 11:09:39.119401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:14.769 [2024-12-05 11:09:39.277088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:14.769 Malloc1 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.769 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:15.027 [2024-12-05 11:09:39.433342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:15.027 { 00:26:15.027 "aliases": [ 00:26:15.027 "95dad9fb-91d0-4010-bb70-9416ce03dd25" 00:26:15.027 ], 00:26:15.027 "assigned_rate_limits": { 00:26:15.027 "r_mbytes_per_sec": 0, 00:26:15.027 "rw_ios_per_sec": 0, 00:26:15.027 "rw_mbytes_per_sec": 0, 00:26:15.027 "w_mbytes_per_sec": 0 00:26:15.027 }, 00:26:15.027 "block_size": 512, 00:26:15.027 "claim_type": "exclusive_write", 00:26:15.027 "claimed": true, 00:26:15.027 "driver_specific": {}, 00:26:15.027 "memory_domains": [ 00:26:15.027 { 00:26:15.027 "dma_device_id": "system", 00:26:15.027 "dma_device_type": 1 00:26:15.027 }, 00:26:15.027 { 00:26:15.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.027 "dma_device_type": 2 00:26:15.027 } 00:26:15.027 ], 00:26:15.027 "name": "Malloc1", 00:26:15.027 "num_blocks": 1048576, 00:26:15.027 "product_name": 
"Malloc disk", 00:26:15.027 "supported_io_types": { 00:26:15.027 "abort": true, 00:26:15.027 "compare": false, 00:26:15.027 "compare_and_write": false, 00:26:15.027 "copy": true, 00:26:15.027 "flush": true, 00:26:15.027 "get_zone_info": false, 00:26:15.027 "nvme_admin": false, 00:26:15.027 "nvme_io": false, 00:26:15.027 "nvme_io_md": false, 00:26:15.027 "nvme_iov_md": false, 00:26:15.027 "read": true, 00:26:15.027 "reset": true, 00:26:15.027 "seek_data": false, 00:26:15.027 "seek_hole": false, 00:26:15.027 "unmap": true, 00:26:15.027 "write": true, 00:26:15.027 "write_zeroes": true, 00:26:15.027 "zcopy": true, 00:26:15.027 "zone_append": false, 00:26:15.027 "zone_management": false 00:26:15.027 }, 00:26:15.027 "uuid": "95dad9fb-91d0-4010-bb70-9416ce03dd25", 00:26:15.027 "zoned": false 00:26:15.027 } 00:26:15.027 ]' 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:26:15.027 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:15.286 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:26:15.286 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.286 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.286 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.286 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:17.184 11:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:26:17.184 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:26:17.440 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:18.370 ************************************ 00:26:18.370 START TEST filesystem_in_capsule_ext4 00:26:18.370 ************************************ 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:26:18.370 11:09:42 
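On the host side the trace shows the connect-and-partition prologue: connect to the subsystem, wait for the serial to appear in lsblk, recover the kernel device name from the serial column, verify the namespace size matches the malloc bdev, then write a GPT with one partition covering the namespace. Sketch from the traced lines; $hostnqn and $hostid stand in for the UUID-based values in the log, and the waitforserial loop shape follows @1209-@1212:

    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # @60
    i=0                                                # @62 waitforserial / @1202
    while (( i++ <= 15 )); do                          # @1210
        sleep 2                                        # @1209
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done                                               # @1211-@1212
    nvme_name=$(lsblk -l -o NAME,SERIAL |
        grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)') # @63: nvme0n1
    # @64-@67: /sys/block size must equal the 536870912-byte malloc bdev
    mkdir -p /mnt/device                               # @66
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%   # @68
    partprobe                                          # @69
    sleep 1                                            # @70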
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:26:18.370 mke2fs 1.47.0 (5-Feb-2023) 00:26:18.370 Discarding device blocks: 0/522240 done 00:26:18.370 Creating filesystem with 522240 1k blocks and 130560 inodes 00:26:18.370 Filesystem UUID: 566463bc-a244-475d-afaa-0cc395bebbf3 00:26:18.370 Superblock backups stored on blocks: 00:26:18.370 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:26:18.370 00:26:18.370 Allocating group tables: 0/64 done 00:26:18.370 Writing inode tables: 0/64 done 00:26:18.370 Creating journal (8192 blocks): done 00:26:18.370 Writing superblocks and filesystem accounting information: 0/64 done 00:26:18.370 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:26:18.370 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:26:24.923 11:09:48 
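Every case in this log is launched through the run_test harness, which prints the starred START/END banners and the real/user/sys summary seen above and below. A hedged reconstruction of its shape from the traced @1105/@1111/@1130 probes; the banner formatting and the point where tracing is re-enabled are approximations:

    run_test() {
        local test_name=$1
        shift
        if [ "$#" -le 1 ]; then               # @1105: argument-count guard
            :                                 # (too-few-args behavior not traced; assumed to warn)
        fi
        xtrace_disable                        # @1111: silence tracing around the banner
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                             # e.g. nvmf_filesystem_create ext4 nvme0n1
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        xtrace_disable                        # @1130
    }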
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72140 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:26:24.923 ************************************ 00:26:24.923 END TEST filesystem_in_capsule_ext4 00:26:24.923 ************************************ 00:26:24.923 00:26:24.923 real 0m5.567s 00:26:24.923 user 0m0.024s 00:26:24.923 sys 0m0.067s 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.923 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:24.923 ************************************ 00:26:24.923 START TEST filesystem_in_capsule_btrfs 00:26:24.924 ************************************ 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@933 -- # local force 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:26:24.924 btrfs-progs v6.8.1 00:26:24.924 See https://btrfs.readthedocs.io for more information. 00:26:24.924 00:26:24.924 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:26:24.924 NOTE: several default settings have changed in version 5.15, please make sure 00:26:24.924 this does not affect your deployments: 00:26:24.924 - DUP for metadata (-m dup) 00:26:24.924 - enabled no-holes (-O no-holes) 00:26:24.924 - enabled free-space-tree (-R free-space-tree) 00:26:24.924 00:26:24.924 Label: (null) 00:26:24.924 UUID: 2a83c4f0-e399-49e8-9e1f-5fc278c8b374 00:26:24.924 Node size: 16384 00:26:24.924 Sector size: 4096 (CPU page size: 4096) 00:26:24.924 Filesystem size: 510.00MiB 00:26:24.924 Block group profiles: 00:26:24.924 Data: single 8.00MiB 00:26:24.924 Metadata: DUP 32.00MiB 00:26:24.924 System: DUP 8.00MiB 00:26:24.924 SSD detected: yes 00:26:24.924 Zoned device: no 00:26:24.924 Features: extref, skinny-metadata, no-holes, free-space-tree 00:26:24.924 Checksum: crc32c 00:26:24.924 Number of devices: 1 00:26:24.924 Devices: 00:26:24.924 ID SIZE PATH 00:26:24.924 1 510.00MiB /dev/nvme0n1p1 00:26:24.924 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72140 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w 
nvme0n1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:26:24.924 ************************************ 00:26:24.924 END TEST filesystem_in_capsule_btrfs 00:26:24.924 ************************************ 00:26:24.924 00:26:24.924 real 0m0.254s 00:26:24.924 user 0m0.031s 00:26:24.924 sys 0m0.084s 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:24.924 ************************************ 00:26:24.924 START TEST filesystem_in_capsule_xfs 00:26:24.924 ************************************ 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:26:24.924 11:09:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:26:24.924 11:09:48 
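run_test hands each case to nvmf_filesystem_create, whose first traced steps (@18-@21) just bind the filesystem type and device name and pass partition 1 to make_filesystem; the mount/write/unmount verification sketched earlier follows. A minimal reconstruction:

    nvmf_filesystem_create() {
        local fstype=$1                  # @18: ext4 | btrfs | xfs
        local nvme_name=$2               # @19: nvme0n1
        make_filesystem "$fstype" "/dev/${nvme_name}p1"   # @21: partition 1 from parted
        # @23-@43: mount, touch, sync, rm, umount, kill -0, lsblk checks (see earlier sketch)
    }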
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:26:24.924 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:26:24.924 = sectsz=512 attr=2, projid32bit=1 00:26:24.924 = crc=1 finobt=1, sparse=1, rmapbt=0 00:26:24.924 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:26:24.924 data = bsize=4096 blocks=130560, imaxpct=25 00:26:24.924 = sunit=0 swidth=0 blks 00:26:24.924 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:26:24.924 log =internal log bsize=4096 blocks=16384, version=2 00:26:24.924 = sectsz=512 sunit=0 blks, lazy-count=1 00:26:24.924 realtime =none extsz=4096 blocks=0, rtextents=0 00:26:24.924 Discarding blocks...Done. 00:26:24.924 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:26:24.924 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72140 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:26:26.825 ************************************ 00:26:26.825 END TEST filesystem_in_capsule_xfs 00:26:26.825 ************************************ 00:26:26.825 00:26:26.825 real 0m2.614s 00:26:26.825 user 0m0.023s 00:26:26.825 sys 0m0.057s 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.825 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:26:26.825 11:09:51 
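The in_capsule teardown that follows mirrors the earlier one and ends in killprocess; its traced probes (@954-@978) verify the pid is set and alive and that the process is not a sudo wrapper before killing and reaping it. A hedged reconstruction:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # @954: pid must be set
        kill -0 "$pid"                               # @958: still alive?
        local process_name=
        if [ "$(uname)" = Linux ]; then              # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 here
        fi
        if [ "$process_name" = sudo ]; then          # @964: would target the child instead (assumed)
            :                                        # branch not taken in this run
        fi
        echo "killing process with pid $pid"         # @972
        kill "$pid"                                  # @973
        wait "$pid"                                  # @978
    }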
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:27.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72140 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 72140 ']' 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 72140 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72140 00:26:27.082 killing process with pid 72140 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72140' 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 72140 00:26:27.082 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 72140 00:26:27.355 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:26:27.355 00:26:27.355 real 0m13.134s 00:26:27.355 user 0m48.787s 00:26:27.355 sys 0m3.102s 00:26:27.355 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.355 ************************************ 00:26:27.355 END TEST nvmf_filesystem_in_capsule 00:26:27.355 ************************************ 00:26:27.355 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:27.614 rmmod nvme_tcp 00:26:27.614 rmmod nvme_fabrics 00:26:27.614 rmmod nvme_keyring 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@254 -- # local dev 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:27.614 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # continue 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # continue 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@274 -- # iptr 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@548 -- # iptables-save 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-restore 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:27.872 00:26:27.872 real 0m29.144s 00:26:27.872 user 1m43.990s 00:26:27.872 sys 0m6.776s 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:26:27.872 ************************************ 00:26:27.872 END TEST nvmf_filesystem 00:26:27.872 ************************************ 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:27.872 ************************************ 00:26:27.872 START TEST nvmf_target_discovery 00:26:27.872 ************************************ 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:26:27.872 * Looking for test storage... 00:26:27.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 
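
The firewall teardown traced above relies on rule tagging rather than bookkeeping: every iptables rule the harness installs carries an 'SPDK_NVMF' comment, so cleanup is a single save/filter/restore pass. A minimal sketch of that idiom (the SPDK_NVMF tag is the one visible in this log; the pipeline itself is plain iptables usage):

  # Drop only the rules the test suite added (tagged via -m comment), keep everything else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
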
00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.872 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:28.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.132 --rc genhtml_branch_coverage=1 00:26:28.132 --rc genhtml_function_coverage=1 00:26:28.132 --rc genhtml_legend=1 00:26:28.132 --rc geninfo_all_blocks=1 00:26:28.132 --rc geninfo_unexecuted_blocks=1 00:26:28.132 00:26:28.132 ' 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:28.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.132 --rc genhtml_branch_coverage=1 00:26:28.132 --rc genhtml_function_coverage=1 00:26:28.132 --rc genhtml_legend=1 00:26:28.132 --rc geninfo_all_blocks=1 00:26:28.132 --rc geninfo_unexecuted_blocks=1 00:26:28.132 00:26:28.132 ' 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:28.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.132 --rc genhtml_branch_coverage=1 00:26:28.132 --rc genhtml_function_coverage=1 00:26:28.132 --rc genhtml_legend=1 00:26:28.132 --rc geninfo_all_blocks=1 00:26:28.132 --rc geninfo_unexecuted_blocks=1 00:26:28.132 00:26:28.132 ' 00:26:28.132 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:28.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.133 --rc 
genhtml_branch_coverage=1 00:26:28.133 --rc genhtml_function_coverage=1 00:26:28.133 --rc genhtml_legend=1 00:26:28.133 --rc geninfo_all_blocks=1 00:26:28.133 --rc geninfo_unexecuted_blocks=1 00:26:28.133 00:26:28.133 ' 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:28.133 11:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:26:28.133 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2
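
The 'integer expression expected' complaint above comes from an empty string reaching an arithmetic test: at nvmf/common.sh line 31, '[' '' -eq 1 ']' runs while the variable behind it expands to nothing, so '[' rejects the operand, returns nonzero, and the script simply takes the false branch and keeps going. A short reproduction in generic bash (not the SPDK code itself), with a defensive variant:

  flag=''
  [ "$flag" -eq 1 ] && echo enabled       # prints '[: : integer expression expected'; test evaluates false
  [ "${flag:-0}" -eq 1 ] && echo enabled  # defaulting the expansion keeps the numeric test well-formed
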
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@223 -- # create_target_ns
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # create_main_bridge
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # return 0
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns=
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev
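
Stripped of the set_up/eval plumbing, the namespace-and-bridge bootstrap just traced reduces to a handful of ip/iptables commands. A condensed sketch using the exact names from the log (nvmf_ns_spdk, nvmf_br):

  ip netns add nvmf_ns_spdk                             # private namespace for the target side
  ip netns exec nvmf_ns_spdk ip link set lo up          # loopback inside the namespace
  ip link add nvmf_br type bridge                       # host bridge tying the veth pairs together
  ip link set nvmf_br up
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT   # let traffic cross the bridge
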
00:26:28.133 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=()
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up initiator0
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns=
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up'
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns=
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up'
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br
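
Each initiator/target pair follows the same template; the remainder of the trace below finishes pair 0 (addressing, namespace move, bridge enslavement, firewall rule). Condensed, with the link/up boilerplate elided and all names, addresses, and the 4420 port taken from this log; pair 1 repeats the same steps with initiator1/target1 and 10.0.0.3/10.0.0.4:

  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk                            # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip link set initiator0_br master nvmf_br                          # both *_br peers get enslaved to the bridge
  ip link set target0_br master nvmf_br
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
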
00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:28.134 10.0.0.1 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:28.134 11:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:28.134 10.0.0.2 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up 
initiator0_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:28.134 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:28.135 11:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:28.135 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set 
target1_br up 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:28.395 10.0.0.3 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target1/ifalias' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:28.395 10.0.0.4 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:28.395 11:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:28.395 11:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:28.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:26:28.395 00:26:28.395 --- 10.0.0.1 ping statistics --- 00:26:28.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.395 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target0 00:26:28.395 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:28.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:26:28.396 00:26:28.396 --- 10.0.0.2 ping statistics --- 00:26:28.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.396 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk 
ping -c 1 10.0.0.3
00:26:28.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:28.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms
00:26:28.396
00:26:28.396 --- 10.0.0.3 ping statistics ---
00:26:28.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.396 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target1
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target1
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target1
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:26:28.396 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:26:28.396 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:26:28.396 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms
00:26:28.396
00:26:28.396 --- 10.0.0.4 ping statistics ---
00:26:28.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:28.396 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
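
That completes the reachability sweep: one ICMP echo per address, alternating sides so both directions of each pair get exercised before the target application starts. Reduced to the underlying commands (all four addresses as assigned earlier in this log):

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator0
  ping -c 1 10.0.0.2                              # host -> target0 (inside the namespace)
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator1
  ping -c 1 10.0.0.4                              # host -> target1
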
00:26:28.396 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:26:28.396 00:26:28.396 --- 10.0.0.4 ping statistics --- 00:26:28.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.396 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # return 0 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:28.396 
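Every address in the ping/verify loop above comes from the same trick: the setup scripts record each veth's IP in the interface's kernel ifalias and read it back on demand, wrapping the read in `ip netns exec` when the device lives inside the target namespace. A minimal sketch of that lookup pattern (not the exact setup.sh helper; the nvmf_ns_spdk namespace name is taken from this run):

    # Read back the IP stored in an interface's ifalias, optionally inside a netns.
    get_dev_ip() {
        local dev=$1 ns=${2:-}
        if [ -n "$ns" ]; then
            ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias"
        else
            cat "/sys/class/net/$dev/ifalias"
        fi
    }

    get_dev_ip initiator1            # -> 10.0.0.3 in this run
    get_dev_ip target1 nvmf_ns_spdk  # -> 10.0.0.4 in this run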
11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:28.396 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target0 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target1 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:28.655 ' 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:28.655 11:09:53 
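nvmf_legacy_env then folds the resolved addresses into the variable names older tests still consume, and because the transport is TCP the transport options also pick up `-o`. The values this run ended with, collected from the trace above:

    # Environment after nvmf_legacy_env in this run (all read from ifalias above):
    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1
    NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_SECOND_TARGET_IP=10.0.0.4
    NVMF_TRANSPORT_OPTS='-t tcp -o'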
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.655 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=72716 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 72716 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72716 ']' 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.656 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.656 [2024-12-05 11:09:53.189756] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:28.656 [2024-12-05 11:09:53.189884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.915 [2024-12-05 11:09:53.346498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.915 [2024-12-05 11:09:53.407709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.915 [2024-12-05 11:09:53.407772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.915 [2024-12-05 11:09:53.407784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.915 [2024-12-05 11:09:53.407794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.915 [2024-12-05 11:09:53.407803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
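nvmfappstart launches the target inside the namespace with the flags shown above, then waitforlisten blocks until the RPC socket answers. A hedged sketch of that launch-and-poll, assuming the default /var/tmp/spdk.sock RPC socket:

    # Start the target in the test namespace and wait for its RPC socket.
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten does roughly this: poll until an RPC goes through.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done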
00:26:28.915 [2024-12-05 11:09:53.408795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.915 [2024-12-05 11:09:53.408843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.915 [2024-12-05 11:09:53.408887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.915 [2024-12-05 11:09:53.408891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 [2024-12-05 11:09:54.236307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 Null1 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 [2024-12-05 11:09:54.284508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 Null2 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:26:29.851 Null3 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 Null4 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 
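Each iteration of the provisioning loop above is the same three-step recipe, preceded by a single transport create. The equivalent explicit rpc.py calls (a sketch; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192      # -u: in-capsule data size
    for i in 1 2 3 4; do
        $rpc bdev_null_create "Null$i" 102400 512     # null bdev: name, size, block size
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"               # -a: allow any host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done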
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.851 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.852 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 4420 00:26:30.110 00:26:30.110 Discovery Log Number of Records 6, Generation counter 6 00:26:30.110 =====Discovery Log Entry 0====== 00:26:30.110 trtype: tcp 00:26:30.110 adrfam: ipv4 00:26:30.110 subtype: current discovery subsystem 00:26:30.110 treq: not required 00:26:30.110 portid: 0 00:26:30.110 trsvcid: 4420 00:26:30.110 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:30.110 traddr: 10.0.0.2 00:26:30.110 eflags: explicit discovery connections, duplicate discovery information 00:26:30.110 sectype: none 00:26:30.110 =====Discovery Log Entry 1====== 00:26:30.110 trtype: tcp 00:26:30.110 adrfam: ipv4 00:26:30.110 subtype: nvme subsystem 00:26:30.110 treq: not required 00:26:30.110 portid: 0 00:26:30.110 trsvcid: 4420 00:26:30.110 subnqn: nqn.2016-06.io.spdk:cnode1 00:26:30.110 traddr: 10.0.0.2 00:26:30.110 eflags: none 00:26:30.110 sectype: none 00:26:30.110 =====Discovery Log Entry 2====== 00:26:30.110 trtype: tcp 00:26:30.110 adrfam: ipv4 00:26:30.110 subtype: nvme subsystem 00:26:30.110 treq: not required 00:26:30.110 portid: 0 00:26:30.110 trsvcid: 4420 00:26:30.110 subnqn: nqn.2016-06.io.spdk:cnode2 00:26:30.110 traddr: 10.0.0.2 00:26:30.110 eflags: none 00:26:30.110 sectype: none 00:26:30.110 =====Discovery Log Entry 3====== 00:26:30.110 trtype: tcp 00:26:30.110 adrfam: ipv4 00:26:30.110 subtype: nvme subsystem 00:26:30.110 treq: not required 00:26:30.110 portid: 0 00:26:30.110 trsvcid: 4420 00:26:30.110 subnqn: nqn.2016-06.io.spdk:cnode3 00:26:30.110 traddr: 10.0.0.2 00:26:30.110 eflags: none 00:26:30.110 sectype: none 00:26:30.110 =====Discovery Log Entry 4====== 00:26:30.110 trtype: tcp 00:26:30.110 adrfam: ipv4 00:26:30.110 subtype: nvme subsystem 
00:26:30.110 treq: not required 00:26:30.110 portid: 0 00:26:30.110 trsvcid: 4420 00:26:30.110 subnqn: nqn.2016-06.io.spdk:cnode4 00:26:30.110 traddr: 10.0.0.2 00:26:30.110 eflags: none 00:26:30.110 sectype: none 00:26:30.110 =====Discovery Log Entry 5====== 00:26:30.110 trtype: tcp 00:26:30.110 adrfam: ipv4 00:26:30.110 subtype: discovery subsystem referral 00:26:30.110 treq: not required 00:26:30.110 portid: 0 00:26:30.110 trsvcid: 4430 00:26:30.110 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:30.110 traddr: 10.0.0.2 00:26:30.110 eflags: none 00:26:30.110 sectype: none 00:26:30.110 Perform nvmf subsystem discovery via RPC 00:26:30.110 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:26:30.110 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:26:30.110 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.110 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.110 [ 00:26:30.110 { 00:26:30.110 "allow_any_host": true, 00:26:30.110 "hosts": [], 00:26:30.110 "listen_addresses": [ 00:26:30.110 { 00:26:30.110 "adrfam": "IPv4", 00:26:30.110 "traddr": "10.0.0.2", 00:26:30.110 "trsvcid": "4420", 00:26:30.110 "trtype": "TCP" 00:26:30.110 } 00:26:30.110 ], 00:26:30.110 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:30.110 "subtype": "Discovery" 00:26:30.110 }, 00:26:30.110 { 00:26:30.110 "allow_any_host": true, 00:26:30.110 "hosts": [], 00:26:30.110 "listen_addresses": [ 00:26:30.110 { 00:26:30.110 "adrfam": "IPv4", 00:26:30.110 "traddr": "10.0.0.2", 00:26:30.110 "trsvcid": "4420", 00:26:30.110 "trtype": "TCP" 00:26:30.110 } 00:26:30.110 ], 00:26:30.110 "max_cntlid": 65519, 00:26:30.110 "max_namespaces": 32, 00:26:30.110 "min_cntlid": 1, 00:26:30.110 "model_number": "SPDK bdev Controller", 00:26:30.110 "namespaces": [ 00:26:30.110 { 00:26:30.110 "bdev_name": "Null1", 00:26:30.110 "name": "Null1", 00:26:30.110 "nguid": "DB5E2D300F03458BAFA6C81DD4CE5048", 00:26:30.110 "nsid": 1, 00:26:30.110 "uuid": "db5e2d30-0f03-458b-afa6-c81dd4ce5048" 00:26:30.110 } 00:26:30.110 ], 00:26:30.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.110 "serial_number": "SPDK00000000000001", 00:26:30.110 "subtype": "NVMe" 00:26:30.110 }, 00:26:30.110 { 00:26:30.110 "allow_any_host": true, 00:26:30.110 "hosts": [], 00:26:30.110 "listen_addresses": [ 00:26:30.110 { 00:26:30.110 "adrfam": "IPv4", 00:26:30.110 "traddr": "10.0.0.2", 00:26:30.110 "trsvcid": "4420", 00:26:30.110 "trtype": "TCP" 00:26:30.110 } 00:26:30.110 ], 00:26:30.110 "max_cntlid": 65519, 00:26:30.110 "max_namespaces": 32, 00:26:30.110 "min_cntlid": 1, 00:26:30.110 "model_number": "SPDK bdev Controller", 00:26:30.110 "namespaces": [ 00:26:30.110 { 00:26:30.110 "bdev_name": "Null2", 00:26:30.110 "name": "Null2", 00:26:30.110 "nguid": "43D2045DCD4C43C99E3D84AD6E34C4B0", 00:26:30.110 "nsid": 1, 00:26:30.110 "uuid": "43d2045d-cd4c-43c9-9e3d-84ad6e34c4b0" 00:26:30.110 } 00:26:30.110 ], 00:26:30.110 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:26:30.110 "serial_number": "SPDK00000000000002", 00:26:30.110 "subtype": "NVMe" 00:26:30.110 }, 00:26:30.110 { 00:26:30.110 "allow_any_host": true, 00:26:30.110 "hosts": [], 00:26:30.110 "listen_addresses": [ 00:26:30.110 { 00:26:30.110 "adrfam": "IPv4", 00:26:30.110 "traddr": "10.0.0.2", 00:26:30.110 "trsvcid": "4420", 00:26:30.110 
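Six records is exactly the expected inventory: one entry for the discovery subsystem currently being queried on 4420, one per NVMe subsystem cnode1 through cnode4, and one referral pointing at port 4430. The initiator-side query, with the same flags as the trace:

    # Walk the discovery log from the initiator side.
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 \
        --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6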
"trtype": "TCP" 00:26:30.111 } 00:26:30.111 ], 00:26:30.111 "max_cntlid": 65519, 00:26:30.111 "max_namespaces": 32, 00:26:30.111 "min_cntlid": 1, 00:26:30.111 "model_number": "SPDK bdev Controller", 00:26:30.111 "namespaces": [ 00:26:30.111 { 00:26:30.111 "bdev_name": "Null3", 00:26:30.111 "name": "Null3", 00:26:30.111 "nguid": "256D4A4E17D543CA8C75D84F36D32AFD", 00:26:30.111 "nsid": 1, 00:26:30.111 "uuid": "256d4a4e-17d5-43ca-8c75-d84f36d32afd" 00:26:30.111 } 00:26:30.111 ], 00:26:30.111 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:26:30.111 "serial_number": "SPDK00000000000003", 00:26:30.111 "subtype": "NVMe" 00:26:30.111 }, 00:26:30.111 { 00:26:30.111 "allow_any_host": true, 00:26:30.111 "hosts": [], 00:26:30.111 "listen_addresses": [ 00:26:30.111 { 00:26:30.111 "adrfam": "IPv4", 00:26:30.111 "traddr": "10.0.0.2", 00:26:30.111 "trsvcid": "4420", 00:26:30.111 "trtype": "TCP" 00:26:30.111 } 00:26:30.111 ], 00:26:30.111 "max_cntlid": 65519, 00:26:30.111 "max_namespaces": 32, 00:26:30.111 "min_cntlid": 1, 00:26:30.111 "model_number": "SPDK bdev Controller", 00:26:30.111 "namespaces": [ 00:26:30.111 { 00:26:30.111 "bdev_name": "Null4", 00:26:30.111 "name": "Null4", 00:26:30.111 "nguid": "2CD728DC9F08420C98A29772765A47C8", 00:26:30.111 "nsid": 1, 00:26:30.111 "uuid": "2cd728dc-9f08-420c-98a2-9772765a47c8" 00:26:30.111 } 00:26:30.111 ], 00:26:30.111 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:26:30.111 "serial_number": "SPDK00000000000004", 00:26:30.111 "subtype": "NVMe" 00:26:30.111 } 00:26:30.111 ] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.111 11:09:54 
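Teardown mirrors setup: delete each subsystem before its backing null bdev, then drop the referral. As explicit calls (sketch):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    for i in 1 2 3 4; do
        $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        $rpc bdev_null_delete "Null$i"
    done
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430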
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:30.111 rmmod nvme_tcp 00:26:30.111 rmmod nvme_fabrics 00:26:30.111 rmmod nvme_keyring 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 72716 ']' 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 72716 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72716 ']' 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72716 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.111 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72716 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72716' 00:26:30.369 killing process with pid 72716 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72716 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72716 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
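nvmfcleanup syncs, then unloads the kernel modules (the rmmod lines above), tolerating failures for a while since nvme-tcp can stay busy while connections drain; killprocess then takes the target down by pid. A hedged sketch of the same sequence:

    sync
    set +e                        # module removal may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"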
nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@254 -- # local dev 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:30.369 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:30.369 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:30.369 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:30.369 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:30.369 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:30.369 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:30.370 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:30.370 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:30.628 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
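The veth deletes here and the iptables restore that follows complete nvmf_fini's network teardown. Condensed into plain ip/iptables commands (a sketch of what the trace performs):

    # remove_target_ns deletes the namespace, taking target0/target1 with it,
    # which is why the dev_map loop 'continue's past those two below.
    ip netns delete nvmf_ns_spdk
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1
    # Strip only the SPDK_NVMF rules; keep the rest of the firewall intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore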
nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # continue 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # continue 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@274 -- # iptr 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-save 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:26:30.629 00:26:30.629 real 0m2.851s 00:26:30.629 user 0m6.805s 00:26:30.629 sys 0m0.892s 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.629 ************************************ 00:26:30.629 END TEST nvmf_target_discovery 00:26:30.629 ************************************ 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:30.629 ************************************ 00:26:30.629 START TEST nvmf_referrals 00:26:30.629 ************************************ 00:26:30.629 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:26:30.887 * Looking for test storage... 
00:26:30.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:30.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.887 --rc genhtml_branch_coverage=1 00:26:30.887 --rc genhtml_function_coverage=1 00:26:30.887 --rc genhtml_legend=1 00:26:30.887 --rc geninfo_all_blocks=1 00:26:30.887 --rc geninfo_unexecuted_blocks=1 00:26:30.887 00:26:30.887 ' 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:30.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.887 --rc genhtml_branch_coverage=1 00:26:30.887 --rc genhtml_function_coverage=1 00:26:30.887 --rc genhtml_legend=1 00:26:30.887 --rc geninfo_all_blocks=1 00:26:30.887 --rc geninfo_unexecuted_blocks=1 00:26:30.887 00:26:30.887 ' 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:30.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.887 --rc genhtml_branch_coverage=1 00:26:30.887 --rc genhtml_function_coverage=1 00:26:30.887 --rc genhtml_legend=1 00:26:30.887 --rc geninfo_all_blocks=1 00:26:30.887 --rc geninfo_unexecuted_blocks=1 00:26:30.887 00:26:30.887 ' 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:30.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.887 --rc genhtml_branch_coverage=1 00:26:30.887 --rc genhtml_function_coverage=1 00:26:30.887 --rc genhtml_legend=1 00:26:30.887 --rc geninfo_all_blocks=1 00:26:30.887 --rc geninfo_unexecuted_blocks=1 00:26:30.887 00:26:30.887 ' 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
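Before anything else, the referrals test probes the installed lcov: scripts/common.sh splits both version strings on dots and compares them component by component, the `ver1[v] < ver2[v]` walk traced above, to pick the right coverage flags. A condensed sketch of that comparison (simplified from cmp_versions; the `lt 1.15 2` call is the one in the trace):

    # Component-wise dotted-version compare: return 0 if $1 < $2.
    lt() {
        local IFS=.-: v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }

    lt 1.15 2 && echo "lcov older than 2: use the legacy --rc options"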
00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.887 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
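Sandwiched into the common.sh sourcing above is the per-run host identity: `nvme gen-hostnqn` mints a UUID-based NQN, and the bare UUID doubles as the host ID. A sketch of the derivation (the suffix-strip is my shorthand; common.sh assigns both values directly):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # bare UUID, e.g. f58d48c7-... in this run
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")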
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 -- # : 0 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:30.888 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' 
']' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@223 -- # create_target_ns 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
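At this point nvmftestinit has torn down any stale target namespace and create_target_ns is building a fresh `nvmf_ns_spdk`, recording the `ip netns exec` prefix in NVMF_TARGET_NS_CMD so later helpers can run commands on the target side of the topology. Reduced to its essentials (a sketch; the teardown line stands in for `_remove_target_ns`, everything else is taken from the trace):

ns=nvmf_ns_spdk
ip netns del "$ns" 2> /dev/null      # drop any leftover namespace from a previous run
ip netns add "$ns"
in_ns=(ip netns exec "$ns")          # same idea as NVMF_TARGET_NS_CMD in setup.sh
"${in_ns[@]}" ip link set lo up      # loopback must be up inside the namespace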
00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # return 0 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 
-- # ips=("$ip" $((++ip))) 00:26:30.888 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:30.889 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up target0 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:31.146 11:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:31.146 10.0.0.1 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:31.146 10.0.0.2 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:31.146 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 
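The lines above wire up one initiator/target pair: two veth pairs, the `*_br` peer ends enslaved to the `nvmf_br` bridge, the target end moved into the namespace, and addresses derived from the 0x0A000001 pool (167772161 = 10·2^24 + 1, so the byte-wise printf yields 10.0.0.1; 167772162 is 10.0.0.2). Condensed into the equivalent commands, including the iptables ACCEPT for port 4420 that follows just below; every command here appears in the trace, only the grouping is editorial:

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk                  # target side lives in the namespace
ip addr add 10.0.0.1/24 dev initiator0                  # 167772161 -> 10.0.0.1
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0_br master nvmf_br                # bridge the peer ends together
ip link set target0_br master nvmf_br
for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

The SPDK_NVMF comment tag added by the `ipts` wrapper marks the rule so that cleanup can later find and delete exactly the rules this run inserted.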
00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator1_br up' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up target1 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772163 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:31.147 11:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:31.147 10.0.0.3 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772164 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:31.147 10.0.0.4 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:31.147 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=initiator1_br 
bridge=nvmf_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:31.405 11:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator0 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:31.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:26:31.405 00:26:31.405 --- 10.0.0.1 ping statistics --- 00:26:31.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.405 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:31.405 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:31.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:31.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:26:31.406 00:26:31.406 --- 10.0.0.2 ping statistics --- 00:26:31.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.406 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:31.406 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:31.406 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:26:31.406 00:26:31.406 --- 10.0.0.3 ping statistics --- 00:26:31.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.406 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:31.406 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:31.406 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:26:31.406 00:26:31.406 --- 10.0.0.4 ping statistics --- 00:26:31.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.406 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # return 0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev 
initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target0 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:31.406 11:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:31.406 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target1 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target1 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:31.406 ' 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.406 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=73002 00:26:31.407 11:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 73002 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 73002 ']' 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.407 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.664 [2024-12-05 11:09:56.110641] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:31.664 [2024-12-05 11:09:56.110753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.664 [2024-12-05 11:09:56.273924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.922 [2024-12-05 11:09:56.342047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.922 [2024-12-05 11:09:56.342123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.922 [2024-12-05 11:09:56.342139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.922 [2024-12-05 11:09:56.342153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.922 [2024-12-05 11:09:56.342164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
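With both interface pairs verified by ping and the legacy variables populated (NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4), nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the RPC socket appears. In outline (a rough sketch; the real waitforlisten in autotest_common.sh does more, this crude poll only conveys the shape):

ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do    # wait for the UNIX-domain RPC listener
    kill -0 "$nvmfpid" || exit 1         # bail out if the target died during startup
    sleep 0.1
done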
00:26:31.922 [2024-12-05 11:09:56.343304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.922 [2024-12-05 11:09:56.343350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.922 [2024-12-05 11:09:56.343449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:31.922 [2024-12-05 11:09:56.343453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.922 [2024-12-05 11:09:56.512710] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.922 [2024-12-05 11:09:56.526296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 
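With all four reactors up, the test provisions the discovery service entirely over RPC. The sequence below condenses the calls traced above into a form that can be issued by hand against the running target; the rpc.py path is the usual SPDK scripts location and is assumed here, and the expected referral count follows from the three add calls:

# Create the TCP transport, expose a discovery listener, and register
# three referrals pointing at other (here fictitious) discovery services.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for last in 2 3 4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "127.0.0.$last" -s 4430
done
$rpc nvmf_discovery_get_referrals | jq length   # expect 3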
00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:26:31.922 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:32.180 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:26:32.181 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.439 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:26:32.439 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:26:32.439 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:26:32.439 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:26:32.439 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:32.439 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:26:32.439 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.439 11:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:26:32.439 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:32.698 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:26:32.956 11:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:26:32.956 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.215 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:26:33.474 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:33.474 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:33.474 rmmod nvme_tcp 00:26:33.474 rmmod nvme_fabrics 00:26:33.474 rmmod nvme_keyring 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 73002 ']' 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 73002 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 73002 ']' 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 73002 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73002 00:26:33.734 killing process with pid 73002 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73002' 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 73002 00:26:33.734 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 73002 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # nvmf_fini 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@254 -- # local dev 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ 
-e /sys/class/net/initiator1/address ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # continue 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # continue 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@274 -- # iptr 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-save 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-restore 00:26:33.994 00:26:33.994 real 0m3.347s 00:26:33.994 user 0m9.261s 00:26:33.994 sys 0m1.117s 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:26:33.994 ************************************ 00:26:33.994 END TEST nvmf_referrals 00:26:33.994 ************************************ 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:33.994 ************************************ 00:26:33.994 START TEST nvmf_connect_disconnect 00:26:33.994 ************************************ 00:26:33.994 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:26:34.255 * Looking for test storage... 
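Before the connect_disconnect output continues, the verification pattern that nvmf_referrals repeated above deserves a note: every mutation is checked twice, once through the RPC view and once through an actual discovery log page fetched with nvme discover, using the jq filters seen verbatim in the trace. A condensed sketch, using the host identity variables that nvmf/common.sh exports:

# Compare the referral list the target reports over RPC with what a
# discovery client observes on the wire; both sides are sorted so the
# comparison is order-insensitive.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
nvme_ips=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
    sort)

[[ $rpc_ips == "$nvme_ips" ]] || echo "referral lists diverge" >&2

The teardown just above is similarly careful on the firewall side: every rule the harness installs carries an SPDK_NVMF comment, so iptables-save | grep -v SPDK_NVMF | iptables-restore strips exactly those rules and nothing else.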
00:26:34.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:34.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.255 --rc genhtml_branch_coverage=1 00:26:34.255 --rc genhtml_function_coverage=1 00:26:34.255 --rc genhtml_legend=1 00:26:34.255 --rc geninfo_all_blocks=1 00:26:34.255 --rc geninfo_unexecuted_blocks=1 00:26:34.255 00:26:34.255 ' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:34.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.255 --rc genhtml_branch_coverage=1 00:26:34.255 --rc genhtml_function_coverage=1 00:26:34.255 --rc genhtml_legend=1 00:26:34.255 --rc geninfo_all_blocks=1 00:26:34.255 --rc geninfo_unexecuted_blocks=1 00:26:34.255 00:26:34.255 ' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:34.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.255 --rc genhtml_branch_coverage=1 00:26:34.255 --rc genhtml_function_coverage=1 00:26:34.255 --rc genhtml_legend=1 00:26:34.255 --rc geninfo_all_blocks=1 00:26:34.255 --rc geninfo_unexecuted_blocks=1 00:26:34.255 00:26:34.255 ' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:34.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.255 --rc genhtml_branch_coverage=1 00:26:34.255 --rc genhtml_function_coverage=1 00:26:34.255 --rc genhtml_legend=1 00:26:34.255 --rc geninfo_all_blocks=1 00:26:34.255 --rc geninfo_unexecuted_blocks=1 00:26:34.255 00:26:34.255 ' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:34.255 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:34.256 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@223 -- # create_target_ns 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
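The "[: : integer expression expected" complaint a few entries up is a real, if harmless, bug captured by the trace: common.sh line 31 runs '[' '' -eq 1 ']', and [ cannot parse an empty string as an integer. A sketch of the two usual guards, with an illustrative variable and stub function rather than the actual flag common.sh tests:

# Failing form, as captured in the log (the lhs expands to the empty string):
#   [ "$SOME_FLAG" -eq 1 ] && enable_feature
enable_feature() { echo "feature enabled"; }

# Fix 1: default the expansion so [ always sees an integer.
[ "${SOME_FLAG:-0}" -eq 1 ] && enable_feature
# Fix 2: arithmetic evaluation treats an empty value as 0, raising no error.
if (( SOME_FLAG == 1 )); then enable_feature; fi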
00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # return 0 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local 
initiator=initiator0 target=target0 _ns= 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:34.256 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up target0 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up target0_br 
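The interface plumbing assembled here and over the next few entries follows one pattern per initiator/target pair: a veth pair whose *_br peer stays in the root namespace and joins the nvmf_br bridge, while the target end moves into nvmf_ns_spdk; the 10.0.0.x addresses are derived from the integer pool starting at 167772161 (0x0A000001) via printf '%u.%u.%u.%u'. Condensed into plain iproute2 commands for the first pair (run as root; names and addresses as in the log):

# Namespace and bridge shared by all pairs.
ip netns add nvmf_ns_spdk
ip link add nvmf_br type bridge
ip link set nvmf_br up

# Pair 0: the initiator stays in the root namespace, the target moves into the ns.
ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk

ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0

# Only the *_br peers attach to the bridge, stitching the two sides together.
ip link set initiator0_br master nvmf_br
ip link set target0_br master nvmf_br

for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up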
00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:34.531 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:34.532 10.0.0.1 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:34.532 11:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:34.532 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:34.532 10.0.0.2 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:34.532 11:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:34.532 
11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up target1 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:34.532 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:34.533 11:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772163 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:34.533 10.0.0.3 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772164 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:34.533 10.0.0.4 00:26:34.533 11:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.533 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link 
set target1_br up' 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:34.793 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator0 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 
-- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:34.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:26:34.794 00:26:34.794 --- 10.0.0.1 ping statistics --- 00:26:34.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.794 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo target0 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target0 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:34.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:26:34.794 00:26:34.794 --- 10.0.0.2 ping statistics --- 00:26:34.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.794 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:34.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:34.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:26:34.794 00:26:34.794 --- 10.0.0.3 ping statistics --- 00:26:34.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.794 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo target1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:34.794 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:34.794 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.121 ms 00:26:34.794 00:26:34.794 --- 10.0.0.4 ping statistics --- 00:26:34.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.794 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:26:34.794 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # return 0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 
-- # local dev=initiator1 in_ns= ip 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo target0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target0 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
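Every address in this block is derived from an integer pool: val_to_ip converts 167772161 into 10.0.0.1, set_ip assigns the /24 and tees the address into /sys/class/net/<dev>/ifalias, and get_ip_address later reads the alias back (wrapped in `ip netns exec nvmf_ns_spdk` for devices that were moved into the target namespace). A condensed sketch of that round trip, assuming the same /24 and namespace name as this run:

val_to_ip() {
    # 167772161 == 0x0A000001 -> 10.0.0.1
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}

set_ip() {
    local dev=$1 ip
    ip=$(val_to_ip "$2")
    ip addr add "$ip/24" dev "$dev"
    # stash the address in the interface alias so lookups need no parsing
    echo "$ip" | tee "/sys/class/net/$dev/ifalias"
}

get_ip_address() {
    cat "/sys/class/net/$1/ifalias"
}

set_ip initiator0 167772161                                      # 10.0.0.1
get_ip_address initiator0                                        # -> 10.0.0.1
ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias    # -> 10.0.0.2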
00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo target1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target1 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:34.795 ' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
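nvmf_legacy_env then maps the dev_map entries back onto the variable names older tests still use. The net effect for this run, with every value read straight from the ifalias lookups traced above:

NVMF_TARGET_INTERFACE=target0
NVMF_TARGET_INTERFACE2=target1
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_SECOND_INITIATOR_IP=10.0.0.3
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_SECOND_TARGET_IP=10.0.0.4
NVMF_TRANSPORT_OPTS='-t tcp -o'    # common.sh appends -o for the tcp transport (sh@312)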
00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=73347 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 73347 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 73347 ']' 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.795 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.055 [2024-12-05 11:09:59.476949] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:35.055 [2024-12-05 11:09:59.477046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.055 [2024-12-05 11:09:59.628576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.055 [2024-12-05 11:09:59.688277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.055 [2024-12-05 11:09:59.688353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.055 [2024-12-05 11:09:59.688366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.055 [2024-12-05 11:09:59.688376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.055 [2024-12-05 11:09:59.688384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
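nvmfappstart launches nvmf_tgt inside the namespace and waits for its RPC socket; connect_disconnect.sh then provisions the target with the transport, bdev, subsystem, and listener calls traced below. A condensed sketch of that bring-up, assuming the standard scripts/rpc.py client and the default /var/tmp/spdk.sock (the real waitforlisten also enforces a retry limit):

modprobe nvme-tcp
ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll until the app answers on its RPC socket
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
until $rpc rpc_get_methods &> /dev/null; do sleep 0.5; done

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                       # returns Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420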
00:26:35.055 [2024-12-05 11:09:59.689357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.055 [2024-12-05 11:09:59.689525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.055 [2024-12-05 11:09:59.690239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.055 [2024-12-05 11:09:59.690241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.314 [2024-12-05 11:09:59.869247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.314 11:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.314 [2024-12-05 11:09:59.939197] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:26:35.314 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:26:37.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:40.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:42.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:44.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:47.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:47.516 rmmod nvme_tcp 00:26:47.516 rmmod nvme_fabrics 00:26:47.516 rmmod nvme_keyring 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 73347 ']' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 73347 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 73347 ']' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 73347 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:47.516 
11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73347 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:47.516 killing process with pid 73347 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73347' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 73347 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 73347 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@254 -- # local dev 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:47.516 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # eval ' ip link 
delete initiator0' 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # continue 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # continue 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@274 -- # iptr 00:26:47.516 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:26:47.517 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:47.517 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:26:47.517 00:26:47.517 real 0m13.506s 00:26:47.517 user 0m47.203s 00:26:47.517 sys 0m3.080s 00:26:47.517 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:47.517 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:47.517 ************************************ 00:26:47.517 END TEST nvmf_connect_disconnect 00:26:47.517 ************************************ 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:47.775 ************************************ 00:26:47.775 START TEST nvmf_multitarget 00:26:47.775 ************************************ 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:26:47.775 * Looking for test storage... 00:26:47.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.775 --rc genhtml_branch_coverage=1 00:26:47.775 --rc genhtml_function_coverage=1 00:26:47.775 --rc genhtml_legend=1 00:26:47.775 --rc geninfo_all_blocks=1 00:26:47.775 --rc geninfo_unexecuted_blocks=1 00:26:47.775 00:26:47.775 ' 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.775 --rc genhtml_branch_coverage=1 00:26:47.775 --rc genhtml_function_coverage=1 00:26:47.775 --rc genhtml_legend=1 00:26:47.775 --rc geninfo_all_blocks=1 00:26:47.775 --rc geninfo_unexecuted_blocks=1 00:26:47.775 00:26:47.775 ' 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.775 --rc genhtml_branch_coverage=1 00:26:47.775 --rc genhtml_function_coverage=1 00:26:47.775 --rc genhtml_legend=1 00:26:47.775 --rc geninfo_all_blocks=1 00:26:47.775 --rc geninfo_unexecuted_blocks=1 00:26:47.775 00:26:47.775 ' 00:26:47.775 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:47.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.775 --rc genhtml_branch_coverage=1 00:26:47.775 --rc genhtml_function_coverage=1 00:26:47.775 --rc genhtml_legend=1 00:26:47.775 --rc geninfo_all_blocks=1 00:26:47.775 --rc geninfo_unexecuted_blocks=1 00:26:47.775 00:26:47.775 ' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
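
The trace above is scripts/common.sh deciding whether the installed lcov predates 2.0: `lt 1.15 2` splits both version strings on `.`, `-` and `:` and compares them field by field. A minimal sketch of that comparison, assuming a standalone helper name (`version_lt`) in place of the script's own `lt`/`cmp_versions` pair:

    # Field-wise numeric version compare, as traced above; missing
    # fields count as 0, so 1.15 vs 2 compares (1,15) against (2,0).
    version_lt() {
        local IFS=.-: v=0
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        while ((v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((v++))
        done
        return 1   # equal counts as "not less than"
    }
    version_lt 1.15 2 && echo "lcov < 2"   # prints: lcov < 2
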
nvmf/common.sh@7 -- # uname -s 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@50 -- # : 0 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:47.776 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@223 -- # create_target_ns 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:47.776 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # return 0 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ 
veth == veth ]] 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.035 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up target0 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local 
dev=target0 ns=nvmf_ns_spdk 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:48.036 10.0.0.1 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:48.036 10.0.0.2 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:48.036 11:10:12 
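
Worth noting in the set_ip calls above: the addresses are not hard-coded. setup_interface_pair draws two consecutive values per pair from an integer pool starting at 0x0a000001, and val_to_ip turns each value into a dotted quad (167772161 -> 10.0.0.1, 167772162 -> 10.0.0.2, and so on). A minimal sketch of that conversion; the octet-shift arithmetic is an assumption about how setup.sh derives printf's four arguments:

    # Integer -> dotted-quad, as in the val_to_ip trace above:
    # 167772161 == 0x0A000001 -> 10.0.0.1
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $((val >> 24)) $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) $((val & 0xff))
    }
    val_to_ip 167772161   # 10.0.0.1
    val_to_ip 167772164   # 10.0.0.4
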
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:48.036 11:10:12 
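
At this point the first initiator/target pair is fully plumbed: each endpoint is one end of a veth pair, the target end lives inside the nvmf_ns_spdk namespace, and both *_br peers hang off the nvmf_br bridge so traffic can cross between the two sides. A condensed replay of the ip(8) calls traced above (device names and addresses as in the log; run as root):

    # Pair-0 plumbing, condensed from the trace above.
    ip netns add nvmf_ns_spdk                                  # target-side namespace
    ip link add nvmf_br type bridge && ip link set nvmf_br up  # shared test bridge
    ip link add initiator0 type veth peer name initiator0_br   # initiator end + bridge peer
    ip link add target0 type veth peer name target0_br         # target end + bridge peer
    ip link set target0 netns nvmf_ns_spdk                     # only the target end enters the ns
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
    ip link set initiator0_br master nvmf_br                   # bridge the two peers together
    ip link set target0_br master nvmf_br
    ip netns exec nvmf_ns_spdk ip link set target0 up
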
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:48.036 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:48.037 11:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up target1 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:48.037 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772163 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator1/ifalias 00:26:48.297 10.0.0.3 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772164 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:48.297 10.0.0.4 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.297 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:48.298 
11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
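
Both INPUT rules installed above carry an `-m comment --comment 'SPDK_NVMF:...'` tag, and that tag is the whole cleanup strategy: the nvmf_fini path at the top of this section removed them with a single `iptables-save | grep -v SPDK_NVMF | iptables-restore`, with no rule handles tracked. A sketch of the wrapper and its matching sweep; the one-line body for ipts is an assumption, while the rule and the teardown pipeline are verbatim from the trace:

    # Tag every SPDK test rule so teardown can sweep them in one pass.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

    # teardown: drop all tagged rules at once
    iptables-save | grep -v SPDK_NVMF | iptables-restore
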
00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:48.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:26:48.298 00:26:48.298 --- 10.0.0.1 ping statistics --- 00:26:48.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.298 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target0 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:48.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:48.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:26:48.298 00:26:48.298 --- 10.0.0.2 ping statistics --- 00:26:48.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.298 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:48.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:48.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:26:48.298 00:26:48.298 --- 10.0.0.3 ping statistics --- 00:26:48.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.298 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:26:48.298 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:48.299 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:48.299 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:26:48.299 00:26:48.299 --- 10.0.0.4 ping statistics --- 00:26:48.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.299 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # return 0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:48.299 11:10:12 
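
The four pings above are ping_ips verifying both directions for both pairs before any NVMe-oF traffic flows: the namespace pings each initiator address, and the host pings each target address across the bridge. One iteration boils down to:

    # Connectivity check for pair 0, as traced above.
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target side -> initiator0
    ping -c 1 10.0.0.2                              # host side  -> target0, via nvmf_br
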
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target0 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.299 11:10:12 
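The address lookups traced here never parse `ip addr` output: setup.sh records every address in the device's ifalias file and reads it back, wrapping the read in `ip netns exec nvmf_ns_spdk` when the device lives in the target namespace. A minimal standalone sketch of that convention, assuming the same device names; lookup_ifalias_ip is an invented name, not a helper from setup.sh:

    # Read the IP recorded in a device's ifalias, optionally inside a netns.
    lookup_ifalias_ip() {
        local dev=$1 netns=$2 ip
        if [[ -n $netns ]]; then
            ip=$(ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias")
        else
            ip=$(cat "/sys/class/net/$dev/ifalias")
        fi
        [[ -n $ip ]] && echo "$ip"
    }

    lookup_ifalias_ip initiator1             # -> 10.0.0.3
    lookup_ifalias_ip target0 nvmf_ns_spdk   # -> 10.0.0.2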
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target1 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:48.299 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:48.559 ' 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=73789 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 73789 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73789 ']' 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.559 11:10:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:26:48.559 [2024-12-05 11:10:13.035064] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:48.559 [2024-12-05 11:10:13.035152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.559 [2024-12-05 11:10:13.188830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.818 [2024-12-05 11:10:13.257061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.818 [2024-12-05 11:10:13.257358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.818 [2024-12-05 11:10:13.257561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.818 [2024-12-05 11:10:13.257719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.818 [2024-12-05 11:10:13.257770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:48.818 [2024-12-05 11:10:13.259140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.818 [2024-12-05 11:10:13.259429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.818 [2024-12-05 11:10:13.259229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.818 [2024-12-05 11:10:13.259316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:26:48.818 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:26:49.076 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:26:49.077 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:26:49.335 "nvmf_tgt_1" 00:26:49.335 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:26:49.335 "nvmf_tgt_2" 00:26:49.335 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:26:49.335 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:26:49.594 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:26:49.594 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:26:49.594 true 00:26:49.594 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:26:49.855 true 00:26:49.855 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:26:49.855 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:26:49.855 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:26:49.855 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
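The pass/fail checks in this test reduce to comparing `jq length` over nvmf_get_targets before and after each create/delete. Condensed into straight commands (same script path as in the trace; target_count is an invented helper, and the -s 32 argument is simply carried over from the invocations above):

    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    target_count() { "$rpc" nvmf_get_targets | jq length; }

    [[ $(target_count) -eq 1 ]]                      # only the default target exists
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    [[ $(target_count) -eq 3 ]]                      # default + two new targets
    "$rpc" nvmf_delete_target -n nvmf_tgt_1
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    [[ $(target_count) -eq 1 ]]                      # back to the default target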
target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:26:49.855 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:26:49.855 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:49.855 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:50.112 rmmod nvme_tcp 00:26:50.112 rmmod nvme_fabrics 00:26:50.112 rmmod nvme_keyring 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 73789 ']' 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 73789 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73789 ']' 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73789 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73789 00:26:50.112 killing process with pid 73789 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73789' 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73789 00:26:50.112 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73789 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@254 -- # local dev 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 
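nvmftestfini unwinds the setup in reverse: unload the nvme-tcp module stack, kill the target by pid, then let nvmf_fini remove the namespace before deleting the bridge and veth devices. A loose condensation of the teardown traced here and in the lines that follow (error handling and setup.sh's dev_map bookkeeping are omitted):

    modprobe -v -r nvme-tcp        # rmmod output above; nvme-fabrics/keyring follow
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # pid 73789 in this run
    ip netns delete nvmf_ns_spdk   # removes target0/target1 along with the namespace
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules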
00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # continue 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # continue 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:50.370 11:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:26:50.370 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@274 -- # iptr 00:26:50.370 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-save 00:26:50.370 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-restore 00:26:50.370 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:50.370 00:26:50.370 real 0m2.825s 00:26:50.370 user 0m7.670s 00:26:50.370 sys 0m0.883s 00:26:50.370 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.370 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:26:50.370 ************************************ 00:26:50.370 END TEST nvmf_multitarget 00:26:50.370 ************************************ 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:50.628 ************************************ 00:26:50.628 START TEST nvmf_rpc 00:26:50.628 ************************************ 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:26:50.628 * Looking for test storage... 
00:26:50.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:50.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.628 --rc genhtml_branch_coverage=1 00:26:50.628 --rc genhtml_function_coverage=1 00:26:50.628 --rc genhtml_legend=1 00:26:50.628 --rc geninfo_all_blocks=1 00:26:50.628 --rc geninfo_unexecuted_blocks=1 00:26:50.628 00:26:50.628 ' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:50.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.628 --rc genhtml_branch_coverage=1 00:26:50.628 --rc genhtml_function_coverage=1 00:26:50.628 --rc genhtml_legend=1 00:26:50.628 --rc geninfo_all_blocks=1 00:26:50.628 --rc geninfo_unexecuted_blocks=1 00:26:50.628 00:26:50.628 ' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:50.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.628 --rc genhtml_branch_coverage=1 00:26:50.628 --rc genhtml_function_coverage=1 00:26:50.628 --rc genhtml_legend=1 00:26:50.628 --rc geninfo_all_blocks=1 00:26:50.628 --rc geninfo_unexecuted_blocks=1 00:26:50.628 00:26:50.628 ' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:50.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.628 --rc genhtml_branch_coverage=1 00:26:50.628 --rc genhtml_function_coverage=1 00:26:50.628 --rc genhtml_legend=1 00:26:50.628 --rc geninfo_all_blocks=1 00:26:50.628 --rc geninfo_unexecuted_blocks=1 00:26:50.628 00:26:50.628 ' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.628 11:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.628 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:50.629 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:50.629 11:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@223 -- # create_target_ns 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # return 0 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set 
nvmf_br up' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:50.629 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up target0 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:50.888 10.0.0.1 00:26:50.888 
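Every address comes out of one integer pool: 167772161 is 0x0A000001, so formatting its four bytes yields 10.0.0.1, and each initiator/target pair consumes two consecutive values. A sketch of that conversion plus the dual write just traced, where the kernel gets the address and ifalias records it for later lookups (val_to_ip_sketch is an illustrative re-derivation, not setup.sh's own val_to_ip):

    val_to_ip_sketch() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
    }

    ip=$(val_to_ip_sketch 167772161)                     # 10.0.0.1
    ip addr add "$ip/24" dev initiator0                  # kernel address
    echo "$ip" | tee /sys/class/net/initiator0/ifalias   # bookkeeping for get_ip_address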
11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:26:50.888 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:50.889 10.0.0.2 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:50.889 
11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up target1 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:50.889 11:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772163 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:50.889 10.0.0.3 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772164 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:50.889 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:50.889 10.0.0.4 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ 
-n NVMF_TARGET_NS_CMD ]] 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:50.890 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:51.149 
11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:51.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:26:51.149 00:26:51.149 --- 10.0.0.1 ping statistics --- 00:26:51.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.149 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target0 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:51.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:51.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:26:51.149 00:26:51.149 --- 10.0.0.2 ping statistics --- 00:26:51.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.149 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:51.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:51.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:26:51.149 00:26:51.149 --- 10.0.0.3 ping statistics --- 00:26:51.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.149 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:51.149 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:51.150 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:51.150 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:26:51.150 00:26:51.150 --- 10.0.0.4 ping statistics --- 00:26:51.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.150 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # return 0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target0 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
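The address bookkeeping traced above follows one pattern throughout setup.sh: each interface's IP is derived from a 32-bit pool value (val_to_ip splits it into octets, so 167772163 = 0x0A000003 becomes 10.0.0.3), and the assigned address is mirrored into /sys/class/net/<dev>/ifalias so that later helpers such as get_ip_address can read it back without parsing `ip addr`. A minimal sketch of that round trip, reconstructed from the traced val_to_ip/set_ip/get_ip_address calls (error handling and the netns indirection are omitted):

    # val_to_ip: split a 32-bit pool value into a dotted quad.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $((val >> 24)) $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) $((val & 0xff))
    }

    ip=$(val_to_ip 167772163)                            # -> 10.0.0.3
    ip addr add "$ip/24" dev initiator1                  # assign the address
    echo "$ip" | tee /sys/class/net/initiator1/ifalias   # cache it for read-back
    cat /sys/class/net/initiator1/ifalias                # get_ip_address reads here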
00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target1 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:51.150 ' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=74062 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 74062 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 74062 ']' 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
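nvmfappstart launches the target inside the namespace (`ip netns exec nvmf_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF`), and waitforlisten then blocks until the app answers RPC on /var/tmp/spdk.sock, which is the "Waiting for process to start up..." message above. A minimal sketch of that wait loop, assuming SPDK's stock scripts/rpc.py client (the real helper lives in autotest_common.sh; max_retries=100 matches the trace):

    nvmfpid=$!                          # PID of the just-launched nvmf_tgt
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the app is listening on the socket
        scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1    # give up if the target already died
        sleep 0.1
    done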
00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.150 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.151 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.151 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.151 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:51.408 [2024-12-05 11:10:15.813915] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:51.408 [2024-12-05 11:10:15.813995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.408 [2024-12-05 11:10:15.967699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.408 [2024-12-05 11:10:16.033742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.408 [2024-12-05 11:10:16.033809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.408 [2024-12-05 11:10:16.033824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.408 [2024-12-05 11:10:16.033839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.408 [2024-12-05 11:10:16.033850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
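The `-m 0xF` core mask explains the four "Reactor started" lines that follow (one reactor per bit, cores 0-3), and `-e 0xFFFF` enables every tracepoint group, which is why the app suggests `spdk_trace -s nvmf -i 0` for snapshots. Decoding such a mask is plain bit testing; a small illustrative snippet (not part of the test scripts):

    mask=0xF
    for ((c = 0; c < 8; c++)); do
        (( (mask >> c) & 1 )) && echo "reactor on core $c"
    done    # prints cores 0..3 for mask 0xF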
00:26:51.408 [2024-12-05 11:10:16.035022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.408 [2024-12-05 11:10:16.035207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.408 [2024-12-05 11:10:16.035269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.408 [2024-12-05 11:10:16.035274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.666 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:26:51.666 "poll_groups": [ 00:26:51.666 { 00:26:51.666 "admin_qpairs": 0, 00:26:51.666 "completed_nvme_io": 0, 00:26:51.666 "current_admin_qpairs": 0, 00:26:51.666 "current_io_qpairs": 0, 00:26:51.666 "io_qpairs": 0, 00:26:51.666 "name": "nvmf_tgt_poll_group_000", 00:26:51.666 "pending_bdev_io": 0, 00:26:51.666 "transports": [] 00:26:51.666 }, 00:26:51.666 { 00:26:51.666 "admin_qpairs": 0, 00:26:51.666 "completed_nvme_io": 0, 00:26:51.666 "current_admin_qpairs": 0, 00:26:51.666 "current_io_qpairs": 0, 00:26:51.666 "io_qpairs": 0, 00:26:51.666 "name": "nvmf_tgt_poll_group_001", 00:26:51.666 "pending_bdev_io": 0, 00:26:51.666 "transports": [] 00:26:51.666 }, 00:26:51.666 { 00:26:51.666 "admin_qpairs": 0, 00:26:51.666 "completed_nvme_io": 0, 00:26:51.666 "current_admin_qpairs": 0, 00:26:51.666 "current_io_qpairs": 0, 00:26:51.666 "io_qpairs": 0, 00:26:51.667 "name": "nvmf_tgt_poll_group_002", 00:26:51.667 "pending_bdev_io": 0, 00:26:51.667 "transports": [] 00:26:51.667 }, 00:26:51.667 { 00:26:51.667 "admin_qpairs": 0, 00:26:51.667 "completed_nvme_io": 0, 00:26:51.667 "current_admin_qpairs": 0, 00:26:51.667 "current_io_qpairs": 0, 00:26:51.667 "io_qpairs": 0, 00:26:51.667 "name": "nvmf_tgt_poll_group_003", 00:26:51.667 "pending_bdev_io": 0, 00:26:51.667 "transports": [] 00:26:51.667 } 00:26:51.667 ], 00:26:51.667 "tick_rate": 2100000000 00:26:51.667 }' 00:26:51.667 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:26:51.667 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:26:51.667 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:26:51.667 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:26:51.667 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
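The `(( 4 == 4 ))` check above is target/rpc.sh asserting one poll group per reactor before any transport exists. Its jcount/jsum helpers can be reconstructed directly from the traced pipelines: both run a jq filter over the captured stats JSON, then count lines or sum values (how `$stats` is fed in is assumed here, the filters and pipes are verbatim from the trace):

    # Reconstructed from the target/rpc.sh trace; $stats holds the
    # JSON returned by `rpc_cmd nvmf_get_stats`.
    jcount() {
        local filter=$1
        jq "$filter" <<< "$stats" | wc -l
    }
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jcount '.poll_groups[].name') == 4 ))          # one group per core
    (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))    # no qpairs yet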
00:26:51.667 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.924 [2024-12-05 11:10:16.335450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.924 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:26:51.924 "poll_groups": [ 00:26:51.924 { 00:26:51.924 "admin_qpairs": 0, 00:26:51.924 "completed_nvme_io": 0, 00:26:51.924 "current_admin_qpairs": 0, 00:26:51.924 "current_io_qpairs": 0, 00:26:51.924 "io_qpairs": 0, 00:26:51.924 "name": "nvmf_tgt_poll_group_000", 00:26:51.924 "pending_bdev_io": 0, 00:26:51.924 "transports": [ 00:26:51.924 { 00:26:51.924 "trtype": "TCP" 00:26:51.924 } 00:26:51.924 ] 00:26:51.924 }, 00:26:51.924 { 00:26:51.924 "admin_qpairs": 0, 00:26:51.924 "completed_nvme_io": 0, 00:26:51.924 "current_admin_qpairs": 0, 00:26:51.924 "current_io_qpairs": 0, 00:26:51.924 "io_qpairs": 0, 00:26:51.925 "name": "nvmf_tgt_poll_group_001", 00:26:51.925 "pending_bdev_io": 0, 00:26:51.925 "transports": [ 00:26:51.925 { 00:26:51.925 "trtype": "TCP" 00:26:51.925 } 00:26:51.925 ] 00:26:51.925 }, 00:26:51.925 { 00:26:51.925 "admin_qpairs": 0, 00:26:51.925 "completed_nvme_io": 0, 00:26:51.925 "current_admin_qpairs": 0, 00:26:51.925 "current_io_qpairs": 0, 00:26:51.925 "io_qpairs": 0, 00:26:51.925 "name": "nvmf_tgt_poll_group_002", 00:26:51.925 "pending_bdev_io": 0, 00:26:51.925 "transports": [ 00:26:51.925 { 00:26:51.925 "trtype": "TCP" 00:26:51.925 } 00:26:51.925 ] 00:26:51.925 }, 00:26:51.925 { 00:26:51.925 "admin_qpairs": 0, 00:26:51.925 "completed_nvme_io": 0, 00:26:51.925 "current_admin_qpairs": 0, 00:26:51.925 "current_io_qpairs": 0, 00:26:51.925 "io_qpairs": 0, 00:26:51.925 "name": "nvmf_tgt_poll_group_003", 00:26:51.925 "pending_bdev_io": 0, 00:26:51.925 "transports": [ 00:26:51.925 { 00:26:51.925 "trtype": "TCP" 00:26:51.925 } 00:26:51.925 ] 00:26:51.925 } 00:26:51.925 ], 00:26:51.925 "tick_rate": 2100000000 00:26:51.925 }' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:26:51.925 11:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.925 Malloc1 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.925 [2024-12-05 11:10:16.513715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -a 10.0.0.2 -s 4420 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -a 10.0.0.2 -s 4420 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -a 10.0.0.2 -s 4420 00:26:51.925 [2024-12-05 11:10:16.542417] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6' 00:26:51.925 Failed to write to /dev/nvme-fabrics: Input/output error 00:26:51.925 could not add new controller: failed to write to nvme-fabrics device 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.925 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 
--hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:52.253 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:26:52.253 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:52.253 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:52.253 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:52.253 11:10:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:54.156 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:54.156 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:54.156 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:54.156 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:54.156 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:54.157 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:54.157 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:54.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:54.414 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:54.414 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.414 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.414 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:54.414 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.414 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:54.415 [2024-12-05 11:10:18.953390] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6' 00:26:54.415 Failed to write to /dev/nvme-fabrics: Input/output error 00:26:54.415 could not add new controller: failed to write to nvme-fabrics device 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.415 11:10:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:54.674 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:26:54.674 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:54.674 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:54.674 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:54.674 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 
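Both `NOT nvme connect` attempts above fail by design: the subsystem was created (or toggled) with allow_any_host disabled, so the target rejects the host NQN ("does not allow host ...") until nvmf_subsystem_add_host or `nvmf_subsystem_allow_any_host -e` admits it, after which the plain connect succeeds. Once connected, waitforserial polls lsblk until the SPDKISFASTANDAWESOME namespace appears; a sketch reconstructed from the traced autotest_common.sh logic (variable names are assumptions):

    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2     # the trace sleeps before each lsblk poll
            local n
            n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( n == want )) && return 0
        done
        return 1        # namespace never showed up
    }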
00:26:56.572 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:56.572 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:56.572 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:56.572 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:56.572 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:56.572 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:56.573 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:56.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:56.830 [2024-12-05 11:10:21.365270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.830 
11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.830 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:57.089 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:26:57.089 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:26:57.089 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.089 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:57.089 11:10:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:26:58.988 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:58.988 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:58.988 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:58.988 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:58.988 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:58.988 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:26:58.989 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:58.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:58.989 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:58.989 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:26:58.989 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:58.989 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 
00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:59.248 [2024-12-05 11:10:23.692950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1202 -- # local i=0 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:59.248 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:27:01.799 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:01.799 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:01.799 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:01.799 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:01.799 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:01.799 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:27:01.799 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:01.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:01.799 [2024-12-05 11:10:26.116713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.799 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:01.800 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:27:03.704 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:03.704 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:03.704 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:03.704 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:03.704 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.704 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:27:03.704 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:03.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:03.963 11:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.963 [2024-12-05 11:10:28.456429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.963 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:04.220 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:04.220 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:27:04.220 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.221 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:04.221 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:06.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.122 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:06.380 [2024-12-05 11:10:30.792085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:06.380 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:08.918 11:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:08.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 [2024-12-05 11:10:33.139765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 [2024-12-05 11:10:33.187697] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.918 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 [2024-12-05 11:10:33.235786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 [2024-12-05 11:10:33.283814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 [2024-12-05 11:10:33.331870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:27:08.919 "poll_groups": [ 00:27:08.919 { 00:27:08.919 "admin_qpairs": 2, 00:27:08.919 "completed_nvme_io": 66, 00:27:08.919 "current_admin_qpairs": 0, 00:27:08.919 "current_io_qpairs": 0, 00:27:08.919 "io_qpairs": 16, 00:27:08.919 "name": "nvmf_tgt_poll_group_000", 00:27:08.919 "pending_bdev_io": 0, 00:27:08.919 "transports": [ 00:27:08.919 { 00:27:08.919 "trtype": "TCP" 00:27:08.919 } 00:27:08.919 ] 00:27:08.919 }, 00:27:08.919 { 00:27:08.919 "admin_qpairs": 3, 00:27:08.919 "completed_nvme_io": 119, 00:27:08.919 "current_admin_qpairs": 0, 00:27:08.919 "current_io_qpairs": 0, 00:27:08.919 "io_qpairs": 17, 00:27:08.919 "name": "nvmf_tgt_poll_group_001", 00:27:08.919 "pending_bdev_io": 0, 00:27:08.919 "transports": [ 00:27:08.919 { 00:27:08.919 "trtype": "TCP" 00:27:08.919 } 00:27:08.919 ] 00:27:08.919 }, 00:27:08.919 { 00:27:08.919 "admin_qpairs": 1, 00:27:08.919 "completed_nvme_io": 166, 00:27:08.919 "current_admin_qpairs": 0, 00:27:08.919 "current_io_qpairs": 0, 00:27:08.919 "io_qpairs": 19, 00:27:08.919 "name": "nvmf_tgt_poll_group_002", 00:27:08.919 "pending_bdev_io": 0, 00:27:08.919 "transports": [ 00:27:08.919 { 00:27:08.919 "trtype": "TCP" 00:27:08.919 } 00:27:08.919 ] 00:27:08.919 }, 00:27:08.919 { 00:27:08.919 "admin_qpairs": 1, 00:27:08.919 "completed_nvme_io": 69, 00:27:08.919 "current_admin_qpairs": 0, 00:27:08.919 "current_io_qpairs": 0, 00:27:08.919 "io_qpairs": 18, 00:27:08.919 "name": "nvmf_tgt_poll_group_003", 00:27:08.919 "pending_bdev_io": 0, 00:27:08.919 "transports": [ 00:27:08.919 { 00:27:08.919 "trtype": "TCP" 00:27:08.919 } 00:27:08.919 ] 00:27:08.919 } 00:27:08.919 ], 00:27:08.919 "tick_rate": 2100000000 00:27:08.919 }' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 
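[Editor's sketch] The jsum helper traced at target/rpc.sh@19..@20 sums a jq filter over the nvmf_get_stats JSON captured above. A minimal sketch follows; the jq filter and awk reducer appear verbatim in the trace, while piping the saved $stats string is an assumption about how rpc.sh wires them together.

    # Sum a numeric jq filter over the captured stats JSON (illustrative).
    jsum() {
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }

Checked against the JSON above, this gives 2+3+1+1 = 7 admin qpairs and 16+17+19+18 = 70 io qpairs across the four poll groups, matching the (( 7 > 0 )) and (( 70 > 0 )) assertions in the trace.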
00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:08.919 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # set +e 00:27:08.920 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:08.920 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:08.920 rmmod nvme_tcp 00:27:08.920 rmmod nvme_fabrics 00:27:08.920 rmmod nvme_keyring 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 74062 ']' 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 74062 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 74062 ']' 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 74062 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74062 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:09.179 killing process with pid 74062 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74062' 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 74062 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 74062 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@254 -- # local dev 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:27:09.179 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # continue 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # continue 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@41 -- # dev_map=() 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@274 -- # iptr 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-save 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-restore 00:27:09.439 00:27:09.439 real 0m18.937s 00:27:09.439 user 1m8.534s 00:27:09.439 sys 0m4.240s 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:09.439 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:09.439 ************************************ 00:27:09.439 END TEST nvmf_rpc 00:27:09.439 ************************************ 00:27:09.439 11:10:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:27:09.439 11:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:09.439 11:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.439 11:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:09.439 ************************************ 00:27:09.439 START TEST nvmf_invalid 00:27:09.439 ************************************ 00:27:09.439 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:27:09.699 * Looking for test storage... 00:27:09.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:27:09.699 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.699 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:09.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.699 --rc genhtml_branch_coverage=1 00:27:09.700 --rc genhtml_function_coverage=1 00:27:09.700 --rc genhtml_legend=1 00:27:09.700 --rc geninfo_all_blocks=1 00:27:09.700 --rc geninfo_unexecuted_blocks=1 00:27:09.700 00:27:09.700 ' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:09.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.700 --rc genhtml_branch_coverage=1 00:27:09.700 --rc genhtml_function_coverage=1 00:27:09.700 --rc genhtml_legend=1 00:27:09.700 --rc geninfo_all_blocks=1 00:27:09.700 --rc geninfo_unexecuted_blocks=1 00:27:09.700 00:27:09.700 ' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:09.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.700 --rc genhtml_branch_coverage=1 00:27:09.700 --rc genhtml_function_coverage=1 00:27:09.700 --rc genhtml_legend=1 00:27:09.700 --rc geninfo_all_blocks=1 00:27:09.700 --rc geninfo_unexecuted_blocks=1 00:27:09.700 00:27:09.700 ' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:09.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.700 --rc genhtml_branch_coverage=1 00:27:09.700 --rc genhtml_function_coverage=1 00:27:09.700 --rc genhtml_legend=1 00:27:09.700 --rc geninfo_all_blocks=1 00:27:09.700 --rc geninfo_unexecuted_blocks=1 00:27:09.700 00:27:09.700 
' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:09.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:09.700 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@223 -- # create_target_ns 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:09.700 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # return 0 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 
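Up to this point the xtrace is test/nvmf/setup.sh assembling its virtual test network: create_target_ns makes a dedicated namespace (nvmf_ns_spdk) for the target side, create_main_bridge adds the nvmf_br host bridge, and set_up brings devices up through a bash nameref so the same helper runs either on the host or inside the namespace; the setup_interface_pair call just starting adds one veth pair per endpoint, traced in the records that follow. A hand-condensed sketch of the sequence, assuming only what the trace shows (this is an illustration, not the setup.sh source):

# Condensed sketch of the traced helpers; device/namespace names come from the log above.
set_up() {                                    # dispatch seen at setup.sh@204-207
    local dev=$1 in_ns=$2
    [[ -n $in_ns ]] && local -n ns=$in_ns     # nameref to an array such as NVMF_TARGET_NS_CMD
    eval "${ns[*]} ip link set $dev up"       # empty nameref -> plain host command
}
ip netns add nvmf_ns_spdk                     # create_target_ns
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
set_up lo NVMF_TARGET_NS_CMD                  # loopback inside the namespace
ip link add nvmf_br type bridge               # create_main_bridge (after the existence check)
set_up nvmf_br
ip link add initiator0 type veth peer name initiator0_br   # create_veth, traced next
ip link add target0 type veth peer name target0_br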
00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up target0 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:09.701 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:09.960 
11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:09.960 10.0.0.1 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:27:09.960 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:09.961 10.0.0.2 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:09.961 
11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up target1 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772163 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:09.961 10.0.0.3 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:09.961 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772164 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:09.961 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:09.961 10.0.0.4 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:09.962 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 
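The addressing in these records comes from a single integer pool: setup_interfaces starts it at 0x0a000001 (167772161), set_ip converts each value to a dotted quad and writes it both to the device and to its ifalias (read back later by get_ip_address), and the pool advances by two per interface pair, which is exactly how 10.0.0.1 through 10.0.0.4 fall out above. A sketch of the conversion, with the bit-shift body inferred so that it reproduces the traced printf '%u.%u.%u.%u\n' 10 0 0 1:

# Inferred sketch of setup.sh's val_to_ip integer-to-dotted-quad helper.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}
val_to_ip 167772161     # -> 10.0.0.1 (initiator0)
val_to_ip 167772162     # -> 10.0.0.2 (target0, inside nvmf_ns_spdk)
val_to_ip 167772164     # -> 10.0.0.4 (target1)
ip addr add 10.0.0.1/24 dev initiator0
echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias   # alias consulted by later lookups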
00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator0 00:27:10.221 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:10.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:27:10.221 00:27:10.221 --- 10.0.0.1 ping statistics --- 00:27:10.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.221 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.221 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:10.222 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:10.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:27:10.222 00:27:10.222 --- 10.0.0.2 ping statistics --- 00:27:10.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.222 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:10.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:10.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:27:10.222 00:27:10.222 --- 10.0.0.3 ping statistics --- 00:27:10.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.222 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:10.222 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:10.222 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:27:10.222 00:27:10.222 --- 10.0.0.4 ping statistics --- 00:27:10.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.222 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # return 0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:10.222 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:10.222 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target0 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target0 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target1 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target1 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:10.223 ' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=74625 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:10.223 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 74625 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74625 ']' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.223 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:10.481 [2024-12-05 11:10:34.916276] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:10.481 [2024-12-05 11:10:34.916384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.481 [2024-12-05 11:10:35.077651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.738 [2024-12-05 11:10:35.143891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.738 [2024-12-05 11:10:35.143982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.738 [2024-12-05 11:10:35.143998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.738 [2024-12-05 11:10:35.144011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.738 [2024-12-05 11:10:35.144023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
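With the veth pairs verified by ping and the legacy NVMF_* variables populated, nvmfappstart launches the target inside the test namespace and waitforlisten blocks until the RPC socket answers. The nvmf_tgt command line below is copied from the trace (nvmf/common.sh@327, pid 74625 in this run); the polling loop is an illustrative stand-in for autotest_common.sh's waitforlisten, not its actual body:

ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Illustrative wait loop: the target is usable once rpc.py gets an answer
# on the default UNIX socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the app died during start-up
    sleep 0.1
done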
00:27:10.738 [2024-12-05 11:10:35.145254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.738 [2024-12-05 11:10:35.145309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.738 [2024-12-05 11:10:35.145405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.738 [2024-12-05 11:10:35.145402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:11.302 11:10:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15558 00:27:11.559 [2024-12-05 11:10:36.171951] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:27:11.559 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/05 11:10:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15558 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:27:11.559 request: 00:27:11.559 { 00:27:11.559 "method": "nvmf_create_subsystem", 00:27:11.559 "params": { 00:27:11.559 "nqn": "nqn.2016-06.io.spdk:cnode15558", 00:27:11.559 "tgt_name": "foobar" 00:27:11.559 } 00:27:11.559 } 00:27:11.559 Got JSON-RPC error response 00:27:11.559 GoRPCClient: error on JSON-RPC call' 00:27:11.559 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/05 11:10:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15558 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:27:11.559 request: 00:27:11.559 { 00:27:11.559 "method": "nvmf_create_subsystem", 00:27:11.559 "params": { 00:27:11.559 "nqn": "nqn.2016-06.io.spdk:cnode15558", 00:27:11.559 "tgt_name": "foobar" 00:27:11.559 } 00:27:11.559 } 00:27:11.559 Got JSON-RPC error response 00:27:11.559 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:27:11.559 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:27:11.559 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8856 00:27:11.817 [2024-12-05 11:10:36.452258] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8856: invalid serial number 'SPDKISFASTANDAWESOME' 00:27:12.076 11:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/05 11:10:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8856 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:27:12.076 request: 00:27:12.076 { 00:27:12.076 "method": "nvmf_create_subsystem", 00:27:12.076 "params": { 00:27:12.076 "nqn": "nqn.2016-06.io.spdk:cnode8856", 00:27:12.076 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:27:12.076 } 00:27:12.076 } 00:27:12.076 Got JSON-RPC error response 00:27:12.076 GoRPCClient: error on JSON-RPC call' 00:27:12.076 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/05 11:10:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8856 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:27:12.076 request: 00:27:12.076 { 00:27:12.076 "method": "nvmf_create_subsystem", 00:27:12.076 "params": { 00:27:12.076 "nqn": "nqn.2016-06.io.spdk:cnode8856", 00:27:12.076 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:27:12.076 } 00:27:12.076 } 00:27:12.076 Got JSON-RPC error response 00:27:12.076 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:27:12.076 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:27:12.076 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17654 00:27:12.335 [2024-12-05 11:10:36.756568] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17654: invalid model number 'SPDK_Controller' 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/05 11:10:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17654], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:27:12.335 request: 00:27:12.335 { 00:27:12.335 "method": "nvmf_create_subsystem", 00:27:12.335 "params": { 00:27:12.335 "nqn": "nqn.2016-06.io.spdk:cnode17654", 00:27:12.335 "model_number": "SPDK_Controller\u001f" 00:27:12.335 } 00:27:12.335 } 00:27:12.335 Got JSON-RPC error response 00:27:12.335 GoRPCClient: error on JSON-RPC call' 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/05 11:10:36 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17654], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:27:12.335 request: 00:27:12.335 { 00:27:12.335 "method": "nvmf_create_subsystem", 00:27:12.335 "params": { 00:27:12.335 "nqn": "nqn.2016-06.io.spdk:cnode17654", 00:27:12.335 "model_number": "SPDK_Controller\u001f" 00:27:12.335 } 00:27:12.335 } 00:27:12.335 Got JSON-RPC error response 00:27:12.335 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:27:12.335 11:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.335 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:27:12.336 
11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:27:12.336 
11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'm[OU3piBn$QNNTZIuKmY(' 00:27:12.336 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'm[OU3piBn$QNNTZIuKmY(' nqn.2016-06.io.spdk:cnode14369 00:27:12.595 [2024-12-05 11:10:37.220994] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14369: invalid serial number 'm[OU3piBn$QNNTZIuKmY(' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/05 11:10:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14369 serial_number:m[OU3piBn$QNNTZIuKmY(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN m[OU3piBn$QNNTZIuKmY( 00:27:12.855 request: 00:27:12.855 { 00:27:12.855 "method": "nvmf_create_subsystem", 00:27:12.855 "params": { 00:27:12.855 "nqn": "nqn.2016-06.io.spdk:cnode14369", 00:27:12.855 "serial_number": "m[OU3piBn$QNNTZIuKmY(" 00:27:12.855 } 00:27:12.855 } 00:27:12.855 Got JSON-RPC error response 00:27:12.855 GoRPCClient: error on JSON-RPC call' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/05 11:10:37 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14369 serial_number:m[OU3piBn$QNNTZIuKmY(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN m[OU3piBn$QNNTZIuKmY( 00:27:12.855 request: 00:27:12.855 { 00:27:12.855 "method": "nvmf_create_subsystem", 00:27:12.855 "params": { 00:27:12.855 "nqn": "nqn.2016-06.io.spdk:cnode14369", 00:27:12.855 "serial_number": "m[OU3piBn$QNNTZIuKmY(" 00:27:12.855 } 00:27:12.855 } 00:27:12.855 Got JSON-RPC error response 00:27:12.855 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:27:12.855 11:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:27:12.855 11:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:27:12.855 
11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:27:12.855 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:27:12.856 
11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=o 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:27:12.856 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'C4mk=kdYranKXs(bpfzg#]&VI=/szN_<k dDX9o(e' 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:16.288 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e
/sys/class/net/initiator1/address ]] 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:16.547 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # continue 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # continue 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@274 -- # iptr 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-save 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-restore 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:16.547 ************************************ 00:27:16.547 END TEST nvmf_invalid 00:27:16.547 ************************************ 00:27:16.547 00:27:16.547 real 0m6.990s 00:27:16.547 user 0m26.878s 00:27:16.547 sys 0m1.722s 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:16.547 ************************************ 00:27:16.547 START TEST nvmf_connect_stress 00:27:16.547 ************************************ 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:27:16.547 * Looking for test storage... 
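The character-by-character walk traced above is gen_random_s from target/invalid.sh: each pass draws a code point from the printable-ASCII pool (32..127), renders it with printf %x plus echo -e, and appends it to $string, producing the 21-character serial 'm[OU3piBn$QNNTZIuKmY(' and the 41-character string 'C4mk=kdYranKXs(bpfzg#]&VI=/szN_<k dDX9o(e' used above. Condensed, and with the random index assumed to be $RANDOM (the index itself is not visible under xtrace), the helper is roughly:

    gen_random_s() {
        local length=$1 ll string
        local chars=({32..127})   # same pool as the chars=('32' ... '127') array traced
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # the traced [[ m == \- ]] / [[ C == \- ]] guard checks the first character:
        # a leading '-' would parse as an option flag; that branch never fires in
        # this run, so its fix-up is not shown here
        echo "$string"
    }

The output is fed straight back through rpc.py nvmf_create_subsystem, and the test glob-matches the captured error (*Unable to find target*, *Invalid SN*, *Invalid MN*), exactly the exchange the out= traces above record.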
00:27:16.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:16.547 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:16.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.808 --rc genhtml_branch_coverage=1 00:27:16.808 --rc genhtml_function_coverage=1 00:27:16.808 --rc genhtml_legend=1 00:27:16.808 --rc geninfo_all_blocks=1 00:27:16.808 --rc geninfo_unexecuted_blocks=1 00:27:16.808 00:27:16.808 ' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:16.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.808 --rc genhtml_branch_coverage=1 00:27:16.808 --rc genhtml_function_coverage=1 00:27:16.808 --rc genhtml_legend=1 00:27:16.808 --rc geninfo_all_blocks=1 00:27:16.808 --rc geninfo_unexecuted_blocks=1 00:27:16.808 00:27:16.808 ' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:16.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.808 --rc genhtml_branch_coverage=1 00:27:16.808 --rc genhtml_function_coverage=1 00:27:16.808 --rc genhtml_legend=1 00:27:16.808 --rc geninfo_all_blocks=1 00:27:16.808 --rc geninfo_unexecuted_blocks=1 00:27:16.808 00:27:16.808 ' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:16.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.808 --rc genhtml_branch_coverage=1 00:27:16.808 --rc genhtml_function_coverage=1 00:27:16.808 --rc genhtml_legend=1 00:27:16.808 --rc geninfo_all_blocks=1 00:27:16.808 --rc geninfo_unexecuted_blocks=1 00:27:16.808 00:27:16.808 ' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
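The scripts/common.sh walk just above is lt 1.15 2: the installed lcov reports 1.15, which predates 2.x, so the pre-2.0 --rc lcov_branch_coverage/lcov_function_coverage spellings get exported into LCOV_OPTS. cmp_versions splits both versions on '.', '-' or ':' and compares them numerically field by field; a condensed equivalent (simplified; the real helper also validates each field through decimal):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}               # missing fields compare as 0
            if ((a > b)); then [[ $op == ">" || $op == ">=" ]]; return; fi
            if ((a < b)); then [[ $op == "<" || $op == "<=" ]]; return; fi
        done
        [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all fields equal
    }
    lt() { cmp_versions "$1" "<" "$2"; }   # lt 1.15 2 -> true, as in this run

Here ver1=(1 15) and ver2=(2), so the first field decides: 1 < 2 with op '<', and lt succeeds.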
00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.808 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:16.809 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:16.809 11:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@223 -- # create_target_ns 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:16.809 11:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # return 0 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:16.809 11:10:41 
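Everything from create_target_ns through the FORWARD rule above condenses to a handful of commands. A standalone sketch, lifted from the eval'd lines in the trace (requires root and iproute2):

    ip netns add nvmf_ns_spdk                        # namespace the NVMe-oF target runs in
    ip netns exec nvmf_ns_spdk ip link set lo up     # loopback up inside the namespace
    ip link add nvmf_br type bridge                  # main bridge joining all veth peers
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

setup_interfaces 2 veth then builds two initiator/target interface pairs on top of this bridge, as traced below.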
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up target0 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:16.809 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:16.810 11:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:16.810 10.0.0.1 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:16.810 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:17.069 11:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:17.069 10.0.0.2 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.069 11:10:41 
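Each pair repeats the same recipe: two veth pairs (initiatorN/initiatorN_br and targetN/targetN_br), the target end moved into the namespace, and addresses derived from a 32-bit pool value (167772161 is 0x0A000001, i.e. 10.0.0.1) written both to the device and to its ifalias so later helpers can read the address back. A sketch for pair 0, with one possible val_to_ip implementation (the real one lives in nvmf/setup.sh):

    val_to_ip() {                       # 167772161 -> 10.0.0.1
        local val=$1
        printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 255)) \
                               $(((val >> 8) & 255)) $((val & 255))
    }

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                    # target side lives in the namespace
    ip addr add "$(val_to_ip 167772161)/24" dev initiator0    # 10.0.0.1
    ip netns exec nvmf_ns_spdk \
        ip addr add "$(val_to_ip 167772162)/24" dev target0   # 10.0.0.2
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias     # ifalias doubles as an address registry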
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local 
dev=initiator1_br in_ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up target1 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:17.069 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772163 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:17.070 11:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:17.070 10.0.0.3 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772164 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:17.070 10.0.0.4 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target1 up' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:17.070 11:10:41 
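To finish a pair, the _br peers are enslaved to the bridge and an INPUT rule opens TCP/4420 on the initiator side; ping_ips then verifies reachability in both directions with single-packet pings, whose output follows below. In standalone form for pair 0:

    ip link set initiator0_br master nvmf_br
    ip link set target0_br master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
    ping -c 1 10.0.0.2                               # host -> target, across the bridge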
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:17.070 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:17.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:17.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:27:17.331 00:27:17.331 --- 10.0.0.1 ping statistics --- 00:27:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.331 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:17.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:17.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:27:17.331 00:27:17.331 --- 10.0.0.2 ping statistics --- 00:27:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.331 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:17.331 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:17.331 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:27:17.331 00:27:17.331 --- 10.0.0.3 ping statistics --- 00:27:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.331 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:17.331 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:17.331 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:27:17.331 00:27:17.331 --- 10.0.0.4 ping statistics --- 00:27:17.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.331 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # return 0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:17.331 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target0 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target0 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
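nvmf_legacy_env maps the dev_map entries back onto the variable names older tests expect, reading each device's ifalias (the two remaining target assignments appear just below). The net result for this run:

    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1    # cat /sys/class/net/initiator0/ifalias
    NVMF_SECOND_INITIATOR_IP=10.0.0.3   # cat /sys/class/net/initiator1/ifalias
    NVMF_FIRST_TARGET_IP=10.0.0.2       # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
    NVMF_SECOND_TARGET_IP=10.0.0.4      # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias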
00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target1 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:17.332 ' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=75191 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 75191 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 75191 ']' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.332 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:17.332 [2024-12-05 11:10:41.949900] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:17.332 [2024-12-05 11:10:41.950002] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.591 [2024-12-05 11:10:42.107926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:17.591 [2024-12-05 11:10:42.171792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.591 [2024-12-05 11:10:42.171850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.591 [2024-12-05 11:10:42.171866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.591 [2024-12-05 11:10:42.171878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.591 [2024-12-05 11:10:42.171890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
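nvmfappstart boots the target inside the namespace and blocks until its RPC socket answers. The core mask 0xE schedules reactors on cores 1 through 3, which matches the three reactor_run notices below. A sketch of the launch plus a minimal waitforlisten-style poll (the real helper in autotest_common.sh does more bookkeeping):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll until the app serves RPCs on the default socket, bailing out if it died.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1    # target died during startup
        sleep 0.5
    done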
00:27:17.591 [2024-12-05 11:10:42.172976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.591 [2024-12-05 11:10:42.173353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.591 [2024-12-05 11:10:42.173358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:17.849 [2024-12-05 11:10:42.341047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:17.849 [2024-12-05 11:10:42.363470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:17.849 NULL1 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=75224 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.849 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:18.413 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.413 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:18.414 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:18.414 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.414 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:18.671 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.671 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:18.671 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:18.671 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.671 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:27:18.929 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.929 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:18.929 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:18.929 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.929 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:19.188 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.188 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:19.188 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:19.188 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.188 11:10:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:19.445 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.445 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:19.445 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:19.445 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.445 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:20.011 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.011 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:20.011 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:20.011 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.011 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:20.269 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.269 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:20.269 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:20.269 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.269 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:20.526 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.526 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:20.526 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:20.526 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.526 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:20.783 11:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.783 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:20.783 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:20.783 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.783 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:21.041 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.041 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:21.041 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:21.041 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.041 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:21.606 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.606 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:21.606 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:21.606 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.606 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:21.864 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.864 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:21.864 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:21.864 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.864 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:22.122 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.122 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:22.122 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:22.122 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.122 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:22.380 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.380 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:22.380 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:22.380 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.380 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:22.947 11:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.947 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:22.947 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:22.947 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.947 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:23.206 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.206 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:23.206 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:23.206 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.206 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:23.465 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.465 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:23.465 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:23.465 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.465 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:23.723 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.723 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:23.723 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:23.723 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.723 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:23.982 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.982 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:23.982 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:23.982 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.982 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:24.549 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.549 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:24.549 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:24.549 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.549 11:10:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:24.885 11:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.885 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:24.885 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:24.885 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.885 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:25.146 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.146 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:25.146 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:25.146 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.146 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:25.405 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.405 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:25.405 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:25.405 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.405 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:25.663 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.663 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:25.663 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:25.663 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.663 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:25.921 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.921 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:25.921 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:25.921 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.921 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:26.487 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.487 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:26.487 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:26.487 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.487 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:26.744 11:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.744 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:26.744 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:26.744 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.744 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:27.002 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.002 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:27.002 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:27.002 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.002 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:27.260 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.260 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:27.260 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:27.260 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.260 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:27.518 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.518 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:27.518 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:27.518 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.518 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:28.083 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.083 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:28.083 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:27:28.083 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.083 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:28.083 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75224 00:27:28.342 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75224) - No such process 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75224 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:28.342 rmmod nvme_tcp 00:27:28.342 rmmod nvme_fabrics 00:27:28.342 rmmod nvme_keyring 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 75191 ']' 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 75191 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 75191 ']' 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 75191 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75191 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.342 killing process with pid 75191 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75191' 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 75191 00:27:28.342 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 75191 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@254 -- # local dev 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:28.601 
11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:28.601 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # continue 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:28.860 11:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # continue 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@274 -- # iptr 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-save 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-restore 00:27:28.860 00:27:28.860 real 0m12.219s 00:27:28.860 user 0m38.141s 00:27:28.860 sys 0m4.903s 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:27:28.860 ************************************ 00:27:28.860 END TEST nvmf_connect_stress 00:27:28.860 ************************************ 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:28.860 ************************************ 00:27:28.860 START TEST nvmf_fused_ordering 00:27:28.860 ************************************ 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:27:28.860 * Looking for test storage... 
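A note on what just scrolled past: the long run of '[[ 0 == 0 ]] ... kill -0 75224' pairs is connect_stress.sh's watch loop. The twenty 'cat' appends earlier assembled rpc.txt; while the connect_stress binary (PID 75224) stays alive for its 10-second run, the loop keeps confirming it with kill -0 and re-issuing the rpc.txt batch via rpc_cmd. Once the binary exits, kill -0 reports 'No such process' and the trap-driven teardown runs. Condensed, that teardown amounts to the following (PID and device names as traced; the real nvmf_fini in setup.sh iterates dev_map rather than hard-coding names):

    modprobe -v -r nvme-tcp nvme-fabrics    # also drops nvme_keyring, as logged
    kill 75191 && wait 75191                # killprocess: stop nvmf_tgt
    ip netns delete nvmf_ns_spdk            # _remove_target_ns; takes target0/target1
                                            # along, hence their bare 'continue' above
    ip link delete nvmf_br                  # remove the test bridge
    ip link delete initiator0
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip SPDK test rules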
00:27:28.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:28.860 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.121 --rc genhtml_branch_coverage=1 00:27:29.121 --rc genhtml_function_coverage=1 00:27:29.121 --rc genhtml_legend=1 00:27:29.121 --rc geninfo_all_blocks=1 00:27:29.121 --rc geninfo_unexecuted_blocks=1 00:27:29.121 00:27:29.121 ' 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.121 --rc genhtml_branch_coverage=1 00:27:29.121 --rc genhtml_function_coverage=1 00:27:29.121 --rc genhtml_legend=1 00:27:29.121 --rc geninfo_all_blocks=1 00:27:29.121 --rc geninfo_unexecuted_blocks=1 00:27:29.121 00:27:29.121 ' 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.121 --rc genhtml_branch_coverage=1 00:27:29.121 --rc genhtml_function_coverage=1 00:27:29.121 --rc genhtml_legend=1 00:27:29.121 --rc geninfo_all_blocks=1 00:27:29.121 --rc geninfo_unexecuted_blocks=1 00:27:29.121 00:27:29.121 ' 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:29.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.121 --rc genhtml_branch_coverage=1 00:27:29.121 --rc genhtml_function_coverage=1 00:27:29.121 --rc genhtml_legend=1 00:27:29.121 --rc geninfo_all_blocks=1 00:27:29.121 --rc geninfo_unexecuted_blocks=1 00:27:29.121 00:27:29.121 ' 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
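The version-comparison walk above (scripts/common.sh) decides which lcov flags this environment needs: 'lt 1.15 2' splits each version on '.', '-' and ':' and compares the components numerically, left to right. A compact equivalent, covering only the '<' path exercised here and assuming purely numeric components:

    lt() {   # sketch: is dotted version $1 strictly older than $2?
        local IFS=.-:
        local -a a b; local i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # older component wins
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1   # newer component loses
        done
        return 1   # equal versions are not strictly less
    }
    # as in the trace: lcov 1.15 is older than 2, so the legacy --rc flags are chosen
    lt "$(lcov --version | awk '{print $NF}')" 2 \
        && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'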
00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:27:29.121 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:29.122 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:29.122 11:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@223 -- # create_target_ns 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:29.122 11:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # return 0 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:29.122 11:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up target0 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:29.122 11:10:53 
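Each setup_interface_pair call creates two veth pairs, one for the initiator side and one for the target side; create_veth is essentially the following (sketch, pair 0 shown). The *_br peers get enslaved to nvmf_br a few steps later, while target0 itself is moved into the namespace.

  ip link add initiator0 type veth peer name initiator0_br   # initiator end + bridge-facing peer
  ip link add target0 type veth peer name target0_br         # target end + bridge-facing peer
  ip link set initiator0 up; ip link set initiator0_br up
  ip link set target0 up;    ip link set target0_br up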
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:29.122 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:29.123 10.0.0.1 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:29.123 11:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:29.123 10.0.0.2 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:29.123 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.384 11:10:53 
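The addressing trick above is worth noting: the pool is carried as a plain integer (167772161 == 0x0A000001) and val_to_ip renders it as a dotted quad, presumably by peeling one byte per octet; set_ip then assigns the address and also writes it into the device's ifalias so later lookups can read it straight back out of sysfs. A sketch:

  val=167772161
  printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
  # -> 10.0.0.1
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias   # recorded for get_ip_address later
  ip link set target0 netns nvmf_ns_spdk                  # target end lives inside the namespace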
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local 
dev=initiator1_br in_ns= 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up target1 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772163 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:29.384 11:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:29.384 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:29.384 10.0.0.3 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772164 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:29.385 10.0.0.4 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target1 up' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:29.385 11:10:53 
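Note how the ipts wrapper embeds the full rule text in an iptables comment: every rule the harness adds is tagged SPDK_NVMF:, so teardown can later find and delete exactly these rules without disturbing anything else on the host. The pattern, verbatim from the trace above:

  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
  # cleanup can then scan iptables-save for SPDK_NVMF: and replay each hit with -D
  # (a sketch; the actual teardown code is not shown in this log)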
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator0 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:29.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:27:29.385 00:27:29.385 --- 10.0.0.1 ping statistics --- 00:27:29.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.385 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:27:29.385 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:29.386 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:29.386 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:29.386 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.386 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.386 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target0 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target0 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:29.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:29.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:27:29.386 00:27:29.386 --- 10.0.0.2 ping statistics --- 00:27:29.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.386 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator1 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:29.386 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:29.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:29.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:27:29.646 00:27:29.646 --- 10.0.0.3 ping statistics --- 00:27:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.646 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target1 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target1 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:29.646 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:29.646 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:29.646 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.118 ms 00:27:29.646 00:27:29.647 --- 10.0.0.4 ping statistics --- 00:27:29.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.647 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # return 0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
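The ping_ips pass above closes the loop on the setup: for each pair it reads the address back out of the ifalias written earlier and sends a single ICMP echo in each direction, from inside the namespace toward the initiator and from the host toward the target. Roughly (pair 0 shown; paths as in this log):

  ip=$(cat /sys/class/net/initiator0/ifalias)    # -> 10.0.0.1
  ip netns exec nvmf_ns_spdk ping -c 1 "$ip"     # target namespace -> initiator
  ping -c 1 "$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)"  # host -> target0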
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target0 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target1 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:29.647 ' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=75611 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 75611 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75611 ']' 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.647 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:29.647 [2024-12-05 11:10:54.267152] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:29.647 [2024-12-05 11:10:54.267292] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.907 [2024-12-05 11:10:54.436918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.907 [2024-12-05 11:10:54.497261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.907 [2024-12-05 11:10:54.497325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.907 [2024-12-05 11:10:54.497342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.907 [2024-12-05 11:10:54.497355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.907 [2024-12-05 11:10:54.497367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
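nvmfappstart launches the target inside the namespace (core mask 0x2, all tracepoint groups enabled) and waitforlisten blocks until the RPC socket answers. A minimal sketch of the same sequence, assuming the repo paths from this log and using rpc_get_methods as the liveness probe (the real waitforlisten is more careful, e.g. it also checks the pid):

  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done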
00:27:29.907 [2024-12-05 11:10:54.497763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:30.166 [2024-12-05 11:10:54.673650] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:30.166 [2024-12-05 11:10:54.694084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:30.166 NULL1 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:27:30.166 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.166 11:10:54 
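The rpc_cmd calls above replay as five plain rpc.py invocations: create the TCP transport (flags exactly as traced), create a subsystem capped at 10 namespaces, attach a listener on the target address, back it with a 1000 MiB null bdev, and expose that bdev as a namespace. A sketch, assuming rpc_cmd simply wraps rpc.py with the right socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # 1000 MiB, 512-byte blocks -> the 'size: 1GB' seen below
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1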
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:30.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:30.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:30.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.167 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:30.167 [2024-12-05 11:10:54.749461] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:30.167 [2024-12-05 11:10:54.749523] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75646 ] 00:27:30.734 Attached to nqn.2016-06.io.spdk:cnode1 00:27:30.734 Namespace ID: 1 size: 1GB 00:27:30.734 fused_ordering(0) 00:27:30.734 fused_ordering(1) 00:27:30.734 fused_ordering(2) 00:27:30.734 fused_ordering(3) 00:27:30.734 fused_ordering(4) 00:27:30.734 fused_ordering(5) 00:27:30.734 fused_ordering(6) 00:27:30.734 fused_ordering(7) 00:27:30.734 fused_ordering(8) 00:27:30.734 fused_ordering(9) 00:27:30.734 fused_ordering(10) 00:27:30.734 fused_ordering(11) 00:27:30.734 fused_ordering(12) 00:27:30.734 fused_ordering(13) 00:27:30.734 fused_ordering(14) 00:27:30.734 fused_ordering(15) 00:27:30.734 fused_ordering(16) 00:27:30.734 fused_ordering(17) 00:27:30.734 fused_ordering(18) 00:27:30.734 fused_ordering(19) 00:27:30.734 fused_ordering(20) 00:27:30.734 fused_ordering(21) 00:27:30.734 fused_ordering(22) 00:27:30.734 fused_ordering(23) 00:27:30.734 fused_ordering(24) 00:27:30.734 fused_ordering(25) 00:27:30.734 fused_ordering(26) 00:27:30.734 fused_ordering(27) 00:27:30.734 fused_ordering(28) 00:27:30.734 fused_ordering(29) 00:27:30.734 fused_ordering(30) 00:27:30.734 fused_ordering(31) 00:27:30.734 fused_ordering(32) 00:27:30.734 fused_ordering(33) 00:27:30.734 fused_ordering(34) 00:27:30.734 fused_ordering(35) 00:27:30.734 fused_ordering(36) 00:27:30.734 fused_ordering(37) 00:27:30.734 fused_ordering(38) 00:27:30.734 fused_ordering(39) 00:27:30.734 fused_ordering(40) 00:27:30.734 fused_ordering(41) 00:27:30.734 fused_ordering(42) 00:27:30.734 fused_ordering(43) 00:27:30.734 fused_ordering(44) 00:27:30.734 fused_ordering(45) 00:27:30.734 fused_ordering(46) 00:27:30.734 fused_ordering(47) 00:27:30.734 fused_ordering(48) 00:27:30.734 fused_ordering(49) 00:27:30.734 fused_ordering(50) 00:27:30.734 fused_ordering(51) 00:27:30.734 fused_ordering(52) 00:27:30.734 fused_ordering(53) 00:27:30.734 fused_ordering(54) 00:27:30.734 fused_ordering(55) 00:27:30.734 fused_ordering(56) 00:27:30.734 fused_ordering(57) 00:27:30.734 fused_ordering(58) 00:27:30.734 fused_ordering(59) 00:27:30.734 fused_ordering(60) 00:27:30.734 fused_ordering(61) 00:27:30.734 fused_ordering(62) 00:27:30.734 
fused_ordering(63) ... fused_ordering(1023) 00:27:32.384
[961 per-operation counter lines, fused_ordering(63) through fused_ordering(1023), condensed;
elapsed timestamps ran from 00:27:30.734 to 00:27:32.384]
11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:27:32.384 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- #
nvmftestfini 00:27:32.384 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:32.384 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:27:32.384 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:32.384 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:27:32.384 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:32.384 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:32.384 rmmod nvme_tcp 00:27:32.384 rmmod nvme_fabrics 00:27:32.384 rmmod nvme_keyring 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 75611 ']' 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 75611 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75611 ']' 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75611 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.384 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75611 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:32.642 killing process with pid 75611 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75611' 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75611 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75611 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@254 -- # local dev 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:32.642 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@258 -- # delete_main_bridge 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:32.643 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # continue 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # continue 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:32.901 11:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@274 -- # iptr 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-save 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-restore 00:27:32.901 ************************************ 00:27:32.901 END TEST nvmf_fused_ordering 00:27:32.901 ************************************ 00:27:32.901 00:27:32.901 real 0m4.065s 00:27:32.901 user 0m4.449s 00:27:32.901 sys 0m1.742s 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:27:32.901 11:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:27:32.902 11:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:32.902 11:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.902 11:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:32.902 ************************************ 00:27:32.902 START TEST nvmf_ns_masking 00:27:32.902 ************************************ 00:27:32.902 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:27:33.162 * Looking for test storage... 
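[A note on the iptr teardown step traced just above: it strips only SPDK-owned firewall rules. Based on the three piped commands in the trace, it amounts to the one-liner below; the setup code tags every rule it adds with an SPDK_NVMF comment (visible when nvmf_br is configured further down), so filtering on that tag and restoring leaves unrelated rules intact.]

# Drop only iptables rules carrying the SPDK_NVMF comment tag, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore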
00:27:33.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.162 --rc genhtml_branch_coverage=1 00:27:33.162 --rc genhtml_function_coverage=1 00:27:33.162 --rc genhtml_legend=1 00:27:33.162 --rc geninfo_all_blocks=1 00:27:33.162 --rc geninfo_unexecuted_blocks=1 00:27:33.162 00:27:33.162 ' 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.162 --rc genhtml_branch_coverage=1 00:27:33.162 --rc genhtml_function_coverage=1 00:27:33.162 --rc genhtml_legend=1 00:27:33.162 --rc geninfo_all_blocks=1 00:27:33.162 --rc geninfo_unexecuted_blocks=1 00:27:33.162 00:27:33.162 ' 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.162 --rc genhtml_branch_coverage=1 00:27:33.162 --rc genhtml_function_coverage=1 00:27:33.162 --rc genhtml_legend=1 00:27:33.162 --rc geninfo_all_blocks=1 00:27:33.162 --rc geninfo_unexecuted_blocks=1 00:27:33.162 00:27:33.162 ' 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:33.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.162 --rc genhtml_branch_coverage=1 00:27:33.162 --rc genhtml_function_coverage=1 00:27:33.162 --rc genhtml_legend=1 00:27:33.162 --rc geninfo_all_blocks=1 00:27:33.162 --rc geninfo_unexecuted_blocks=1 00:27:33.162 00:27:33.162 ' 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:33.162 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@50 -- # : 0 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:33.163 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 
-- # '[' -n '' ']' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=060ba3bd-6a49-41d4-8c84-8e3c271cda1a 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8a71c0b0-1771-4c17-bc50-888a4484e1b3 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a5c05d80-50b9-4fab-9229-af4ab303b7f7 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@223 -- # create_target_ns 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # return 0 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
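[Editor's sketch] For readers following the xtrace, create_target_ns and create_main_bridge reduce to a handful of iproute2/iptables calls. A condensed sketch using the names from the log (run as root; the helpers in setup.sh are more defensive, e.g. they delete a pre-existing bridge first):

    ip netns add nvmf_ns_spdk                            # namespace the nvmf target will run in
    ip netns exec nvmf_ns_spdk ip link set lo up         # loopback inside the namespace
    ip link add nvmf_br type bridge                      # bridge joining all *_br veth peers
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT  # permit traffic hairpinning across the bridge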
00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:33.163 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 -- # set_up target0 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local 
dev=target0 in_ns= 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:33.164 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:33.423 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:33.424 10.0.0.1 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:33.424 11:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:33.424 10.0.0.2 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:33.424 11:10:57 
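[Editor's sketch] Each initiator/target pair follows the same pattern: two veth pairs, the target end moved into the namespace, addresses assigned, and each IP stashed in the device's ifalias so later helpers can read it back with cat instead of parsing `ip addr`. Condensed for pair 0, with addresses as traced (run as root):

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                   # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias    # self-describing device alias
    ip link set initiator0 up && ip link set initiator0_br up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT  # 4420 = NVMe/TCP port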
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 
-- # set_up initiator1 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 -- # set_up target1 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:33.424 11:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772163 00:27:33.424 11:10:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:33.424 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:33.424 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:33.424 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:33.424 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:33.424 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:33.424 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:33.424 10.0.0.3 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772164 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:33.425 10.0.0.4 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:33.425 
11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:33.425 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:33.686 11:10:58 
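[Editor's sketch] The dotted-quad addresses come from a single integer pool (0x0a000001 == 167772161 == 10.0.0.1) that advances by two per pair. The trace only shows printf receiving the already-split octets; val_to_ip presumably shifts them out roughly like this (an inference from the trace, not the verbatim helper):

    val=167772163                                    # 0x0A000003, first address of pair 1
    printf '%u.%u.%u.%u\n' \
      $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
      $(( (val >>  8) & 255 )) $((  val        & 255 ))   # -> 10.0.0.3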
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:33.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:27:33.686 00:27:33.686 --- 10.0.0.1 ping statistics --- 00:27:33.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.686 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target0 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:33.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:33.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:27:33.686 00:27:33.686 --- 10.0.0.2 ping statistics --- 00:27:33.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.686 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:33.686 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:33.686 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:27:33.686 00:27:33.686 --- 10.0.0.3 ping statistics --- 00:27:33.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.686 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:33.686 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:33.687 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:33.687 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:27:33.687 00:27:33.687 --- 10.0.0.4 ping statistics --- 00:27:33.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.687 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # return 0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target0 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target1 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:33.687 ' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:33.687 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 
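[Editor's sketch] Once all four addresses ping cleanly, nvmf_legacy_env just reads the ifalias files back to populate the variables older tests expect, which is why the earlier `tee .../ifalias` writes matter. In effect:

    NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)                       # 10.0.0.1
    NVMF_SECOND_INITIATOR_IP=$(cat /sys/class/net/initiator1/ifalias)                      # 10.0.0.3
    NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)  # 10.0.0.2
    NVMF_SECOND_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias) # 10.0.0.4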
-- # nvmfpid=75901 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 75901 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75901 ']' 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.688 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:33.946 [2024-12-05 11:10:58.363504] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:33.947 [2024-12-05 11:10:58.363655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.947 [2024-12-05 11:10:58.525149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.947 [2024-12-05 11:10:58.588826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.947 [2024-12-05 11:10:58.588888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.947 [2024-12-05 11:10:58.588903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.947 [2024-12-05 11:10:58.588917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.947 [2024-12-05 11:10:58.588928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
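[Editor's sketch] nvmfappstart launches the target inside the namespace so it binds the namespaced target0/target1 devices rather than the host side; pid and flags are as traced:

    ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!          # 75901 in this run
    # waitforlisten then polls until the app answers RPCs on /var/tmp/spdk.sock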
00:27:33.947 [2024-12-05 11:10:58.589310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.204 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.204 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:27:34.204 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:34.204 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.205 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:34.205 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.205 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:34.463 [2024-12-05 11:10:59.038893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.463 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:27:34.463 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:27:34.463 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:34.722 Malloc1 00:27:34.722 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:27:34.980 Malloc2 00:27:34.980 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:35.263 11:10:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:27:35.832 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.832 [2024-12-05 11:11:00.408437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.832 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:27:35.832 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5c05d80-50b9-4fab-9229-af4ab303b7f7 -a 10.0.0.2 -s 4420 -i 4 00:27:36.090 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:27:36.090 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:27:36.090 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:36.090 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:36.090 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:27:37.989 11:11:02 
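[Editor's sketch] The control-plane sequence interleaved with the trace above is compact enough to restate end to end, flags exactly as traced (comments gloss only the flags whose meaning is well established):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # -u: in-capsule data size, bytes
    $rpc bdev_malloc_create 64 512 -b Malloc1           # 64 MiB ramdisk, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1                 # NSID 1, auto-visible
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I a5c05d80-50b9-4fab-9229-af4ab303b7f7 -a 10.0.0.2 -s 4420 -i 4   # -I: host UUID, -i: I/O queues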
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:37.989 [ 0]:0x1 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:37.989 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:38.246 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc0083b91044df8921eb9ba3a9732fc 00:27:38.247 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc0083b91044df8921eb9ba3a9732fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:38.247 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:27:38.504 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:27:38.504 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:38.504 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:38.504 [ 0]:0x1 00:27:38.504 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:38.504 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:38.504 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc0083b91044df8921eb9ba3a9732fc 00:27:38.504 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc0083b91044df8921eb9ba3a9732fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:38.504 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
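[Editor's sketch] ns_is_visible condenses to two probes: the NSID must appear in the controller's active-namespace list, and its NGUID must be non-zero, since a masked namespace identifies as all zeros. Roughly:

    nvme list-ns /dev/nvme0 | grep 0x1                    # is NSID 1 listed at all?
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != 00000000000000000000000000000000 ]]      # all-zero NGUID => hidden from this host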
target/ns_masking.sh@43 -- # grep 0x2 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:38.505 [ 1]:0x2 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49e5acde3ab1404a8573ebdb95ac642f 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49e5acde3ab1404a8573ebdb95ac642f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:27:38.505 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:38.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:38.816 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.076 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:27:39.334 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:27:39.334 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5c05d80-50b9-4fab-9229-af4ab303b7f7 -a 10.0.0.2 -s 4420 -i 4 00:27:39.334 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:27:39.334 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:27:39.334 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:39.335 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:27:39.335 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:27:39.335 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 
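The trace above boils down to the following setup-and-check sequence (RPCs and nvme-cli calls copied from the xtrace; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the host-UUID and -i retry arguments are omitted, so treat this as an illustrative condensation rather than the literal ns_masking.sh source):

  # Target side: TCP transport, one malloc-backed namespace, a TCP listener.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Host side: connect, then decide visibility by reading back the NGUID for
  # the NSID; a masked namespace reports an all-zero NGUID.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid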
00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:41.863 11:11:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:41.863 [ 0]:0x2 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49e5acde3ab1404a8573ebdb95ac642f 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49e5acde3ab1404a8573ebdb95ac642f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:41.863 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:41.864 [ 0]:0x1 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc0083b91044df8921eb9ba3a9732fc 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc0083b91044df8921eb9ba3a9732fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:41.864 [ 1]:0x2 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:41.864 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:42.123 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49e5acde3ab1404a8573ebdb95ac642f 00:27:42.123 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49e5acde3ab1404a8573ebdb95ac642f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:42.123 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:42.381 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:27:42.381 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:42.381 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:27:42.381 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:42.381 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.381 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:27:42.381 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:42.382 11:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:42.382 [ 0]:0x2 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:42.382 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:42.382 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49e5acde3ab1404a8573ebdb95ac642f 00:27:42.382 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49e5acde3ab1404a8573ebdb95ac642f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:42.382 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:27:42.382 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:42.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:42.640 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5c05d80-50b9-4fab-9229-af4ab303b7f7 -a 10.0.0.2 -s 4420 -i 4 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:27:42.899 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:27:44.801 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:45.060 [ 0]:0x1 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bcc0083b91044df8921eb9ba3a9732fc 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bcc0083b91044df8921eb9ba3a9732fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:45.060 [ 1]:0x2 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49e5acde3ab1404a8573ebdb95ac642f 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49e5acde3ab1404a8573ebdb95ac642f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:45.060 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:45.319 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:45.578 [ 0]:0x2 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:45.578 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:45.578 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49e5acde3ab1404a8573ebdb95ac642f 00:27:45.578 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49e5acde3ab1404a8573ebdb95ac642f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:45.578 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:27:45.578 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # local es=0 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:45.579 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:27:45.838 [2024-12-05 11:11:10.374900] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:27:45.838 2024/12/05 11:11:10 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:27:45.838 request: 00:27:45.838 { 00:27:45.838 "method": "nvmf_ns_remove_host", 00:27:45.838 "params": { 00:27:45.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:45.838 "nsid": 2, 00:27:45.838 "host": "nqn.2016-06.io.spdk:host1" 00:27:45.838 } 00:27:45.838 } 00:27:45.838 Got JSON-RPC error response 00:27:45.838 GoRPCClient: error on JSON-RPC call 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:45.838 11:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:27:45.838 [ 0]:0x2 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:27:45.838 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=49e5acde3ab1404a8573ebdb95ac642f 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 49e5acde3ab1404a8573ebdb95ac642f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:46.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76271 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76271 /var/tmp/host.sock 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 76271 ']' 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.128 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:46.128 [2024-12-05 11:11:10.636349] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:46.128 [2024-12-05 11:11:10.636478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76271 ] 00:27:46.386 [2024-12-05 11:11:10.797244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.386 [2024-12-05 11:11:10.866989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.324 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.324 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:27:47.324 11:11:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.583 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:47.842 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 060ba3bd-6a49-41d4-8c84-8e3c271cda1a 00:27:47.842 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:27:47.842 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 060BA3BD6A4941D48C848E3C271CDA1A -i 00:27:48.101 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8a71c0b0-1771-4c17-bc50-888a4484e1b3 00:27:48.101 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:27:48.101 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8A71C0B017714C17BC50888A4484E1B3 -i 00:27:48.361 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:27:48.926 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:27:48.926 11:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:27:48.926 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:27:49.544 nvme0n1 00:27:49.544 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:27:49.544 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:27:49.544 nvme1n2 00:27:49.544 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:27:49.544 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:27:49.544 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:27:49.544 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:27:49.544 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:27:49.802 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:27:49.802 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:27:49.802 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:27:49.802 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:27:50.368 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 060ba3bd-6a49-41d4-8c84-8e3c271cda1a == \0\6\0\b\a\3\b\d\-\6\a\4\9\-\4\1\d\4\-\8\c\8\4\-\8\e\3\c\2\7\1\c\d\a\1\a ]] 00:27:50.368 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:27:50.368 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:27:50.368 11:11:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:27:50.634 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 8a71c0b0-1771-4c17-bc50-888a4484e1b3 == \8\a\7\1\c\0\b\0\-\1\7\7\1\-\4\c\1\7\-\b\c\5\0\-\8\8\8\a\4\4\8\4\e\1\b\3 ]] 00:27:50.634 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.892 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
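The per-host masking being exercised here reduces to a three-RPC toggle (subsystem, NSID, and host NQN exactly as they appear in the trace above):

  # Attach the namespace hidden from all hosts, then grant/revoke it per host.
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Judging by the tr -d - call and the -g values in the trace, the uuid2nguid helper amounts to dash-stripping plus uppercasing; the line below is a guess at its effect, not the verbatim nvmf/common.sh source:

  echo 060ba3bd-6a49-41d4-8c84-8e3c271cda1a | tr -d - | tr a-f A-F
  # -> 060BA3BD6A4941D48C848E3C271CDA1A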
00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 060ba3bd-6a49-41d4-8c84-8e3c271cda1a 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 060BA3BD6A4941D48C848E3C271CDA1A 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 060BA3BD6A4941D48C848E3C271CDA1A 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:51.151 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 060BA3BD6A4941D48C848E3C271CDA1A 00:27:51.410 [2024-12-05 11:11:15.965202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:27:51.410 [2024-12-05 11:11:15.965255] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:27:51.410 [2024-12-05 11:11:15.965269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:27:51.410 2024/12/05 11:11:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:060BA3BD6A4941D48C848E3C271CDA1A no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:27:51.410 request: 00:27:51.410 { 00:27:51.410 "method": "nvmf_subsystem_add_ns", 00:27:51.410 "params": { 00:27:51.410 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.410 "namespace": { 00:27:51.410 "bdev_name": "invalid", 00:27:51.410 "nsid": 1, 00:27:51.410 "nguid": "060BA3BD6A4941D48C848E3C271CDA1A", 00:27:51.410 "no_auto_visible": false, 00:27:51.410 "hide_metadata": false 00:27:51.410 } 00:27:51.410 } 00:27:51.410 } 00:27:51.410 Got JSON-RPC error response 00:27:51.410 
GoRPCClient: error on JSON-RPC call 00:27:51.410 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:27:51.410 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:51.410 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:51.410 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:51.410 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 060ba3bd-6a49-41d4-8c84-8e3c271cda1a 00:27:51.410 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:27:51.410 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 060BA3BD6A4941D48C848E3C271CDA1A -i 00:27:51.669 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76271 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76271 ']' 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76271 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76271 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:54.202 killing process with pid 76271 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76271' 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76271 00:27:54.202 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76271 00:27:54.461 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:55.027 11:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:55.027 rmmod nvme_tcp 00:27:55.027 rmmod nvme_fabrics 00:27:55.027 rmmod nvme_keyring 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 75901 ']' 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 75901 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75901 ']' 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75901 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75901 00:27:55.027 killing process with pid 75901 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75901' 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75901 00:27:55.027 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75901 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@254 -- # local dev 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@122 
-- # delete_dev nvmf_br 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # continue 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # continue 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@274 -- # iptr 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # grep -v 
SPDK_NVMF 00:27:55.287 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-save 00:27:55.288 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-restore 00:27:55.288 00:27:55.288 real 0m22.450s 00:27:55.288 user 0m37.647s 00:27:55.288 sys 0m4.480s 00:27:55.288 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.288 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:27:55.288 ************************************ 00:27:55.288 END TEST nvmf_ns_masking 00:27:55.288 ************************************ 00:27:55.553 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:27:55.553 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:27:55.553 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:55.553 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.553 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.553 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:55.553 ************************************ 00:27:55.553 START TEST nvmf_auth_target 00:27:55.553 ************************************ 00:27:55.553 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:55.553 * Looking for test storage... 00:27:55.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.553 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:27:55.553 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.554 --rc genhtml_branch_coverage=1 00:27:55.554 --rc genhtml_function_coverage=1 00:27:55.554 --rc genhtml_legend=1 00:27:55.554 --rc geninfo_all_blocks=1 00:27:55.554 --rc geninfo_unexecuted_blocks=1 00:27:55.554 00:27:55.554 ' 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.554 --rc genhtml_branch_coverage=1 00:27:55.554 --rc genhtml_function_coverage=1 00:27:55.554 --rc genhtml_legend=1 00:27:55.554 --rc geninfo_all_blocks=1 00:27:55.554 --rc geninfo_unexecuted_blocks=1 00:27:55.554 00:27:55.554 ' 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.554 --rc genhtml_branch_coverage=1 00:27:55.554 --rc genhtml_function_coverage=1 00:27:55.554 --rc genhtml_legend=1 00:27:55.554 --rc geninfo_all_blocks=1 00:27:55.554 --rc geninfo_unexecuted_blocks=1 00:27:55.554 00:27:55.554 ' 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:55.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.554 --rc 
genhtml_branch_coverage=1 00:27:55.554 --rc genhtml_function_coverage=1 00:27:55.554 --rc genhtml_legend=1 00:27:55.554 --rc geninfo_all_blocks=1 00:27:55.554 --rc geninfo_unexecuted_blocks=1 00:27:55.554 00:27:55.554 ' 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.554 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.813 
11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.813 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@50 -- # : 0 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:55.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@223 -- # create_target_ns 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 
-- # ip netns add nvmf_ns_spdk 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 
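At this point the trace has finished the shared scaffolding that the per-pair setup starting here (setup_interface_pair 0) plugs into: a dedicated network namespace for the target side, a loopback brought up inside it, and a host-side bridge with a FORWARD rule so traffic may hairpin across it. Condensed from the commands above (a sketch, not the setup.sh implementation; the real iptables rule also tags itself with an SPDK_NVMF comment):

    # Shared scaffold for the veth-based (NET_TYPE=virt) topology
    ip netns add nvmf_ns_spdk                      # target apps live in their own netns
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                # host-side bridge tying the pairs together
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The bridge-plus-veth arrangement is what lets a single VM emulate a small two-host NVMe/TCP fabric with no physical NICs involved.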
00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target0 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@152 -- # set_up target0_br 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:55.814 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:55.815 10.0.0.1 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns 
exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:55.815 10.0.0.2 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 
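The first initiator/target pair is now fully wired, and the trace is already repeating the same sequence for initiator1/target1 with the next two addresses from the pool. Pulled together, one pair amounts to the following (a sketch with the link-up and ifalias bookkeeping steps omitted; val_to_ip simply renders a 32-bit integer as a dotted quad, e.g. 167772161 = 0x0A000001 = 10.0.0.1):

    # One initiator/target veth pair on the 10.0.0.0/24 pool
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk           # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br         # bridge ends join nvmf_br
    ip link set target0_br master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

Once both pairs exist, the ping_ips pass below checks each initiator address from inside the namespace and each target address from the host, confirming the bridge actually forwards in both directions before any NVMe traffic is attempted.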
00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target1 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:55.815 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772163 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:56.076 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:56.076 10.0.0.3 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772164 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:56.076 10.0.0.4 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:56.076 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@38 -- # ping_ips 2 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:56.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:27:56.077 00:27:56.077 --- 10.0.0.1 ping statistics --- 00:27:56.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.077 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:56.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:56.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:27:56.077 00:27:56.077 --- 10.0.0.2 ping statistics --- 00:27:56.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.077 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:56.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:27:56.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:27:56.077 00:27:56.077 --- 10.0.0.3 ping statistics --- 00:27:56.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.077 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:56.077 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:56.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:56.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:27:56.078 00:27:56.078 --- 10.0.0.4 ping statistics --- 00:27:56.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.078 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # return 0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:56.078 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:56.078 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.337 11:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:56.337 ' 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=76777 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 76777 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76777 ']' 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.337 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.271 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.271 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:27:57.271 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:57.271 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:57.271 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76821 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=cb250a55d833483aa220869647a35a6c1c1aecc83d8c5efa 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 
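
The trace above is gen_dhchap_key at work: it draws the requested number of random hex characters from /dev/urandom with `xxd -p -c0`, creates a temp file for the secret (chmod 0600 afterwards), and the `python -` step that follows wraps the raw hex into the NVMe DH-CHAP secret representation. Below is a minimal standalone sketch of that wrapping, assuming the usual DHHC-1 layout (a two-hex-digit hash id, then base64 of the key bytes concatenated with their little-endian CRC-32); the helper name make_dhchap_key is illustrative and not part of the SPDK scripts.

make_dhchap_key() {
  local digest=$1 len=$2  # hash id: 0=null, 1=sha256, 2=sha384, 3=sha512
  local hex
  hex=$(xxd -p -c0 -l "$((len / 2))" /dev/urandom)  # len hex characters of key material
  python3 - "$hex" "$digest" <<'PYEOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the key, appended little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}
# usage, mirroring the trace:
#   file=$(mktemp -t spdk.key-null.XXX)
#   make_dhchap_key 0 48 > "$file" && chmod 0600 "$file"
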
00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Do5 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key cb250a55d833483aa220869647a35a6c1c1aecc83d8c5efa 0 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 cb250a55d833483aa220869647a35a6c1c1aecc83d8c5efa 0 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=cb250a55d833483aa220869647a35a6c1c1aecc83d8c5efa 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:27:57.530 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Do5 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Do5 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Do5 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=1b3532279273518b44676f22af5ce4c6d5a66626ca81bec2597c0f4eda36486f 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.8Sf 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 1b3532279273518b44676f22af5ce4c6d5a66626ca81bec2597c0f4eda36486f 3 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 1b3532279273518b44676f22af5ce4c6d5a66626ca81bec2597c0f4eda36486f 3 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=1b3532279273518b44676f22af5ce4c6d5a66626ca81bec2597c0f4eda36486f 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@507 -- # python - 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.8Sf 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.8Sf 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.8Sf 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=416e07b4f5e3d29857ec104baccaba0a 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.uIM 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 416e07b4f5e3d29857ec104baccaba0a 1 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 416e07b4f5e3d29857ec104baccaba0a 1 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=416e07b4f5e3d29857ec104baccaba0a 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:27:57.530 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.uIM 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.uIM 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.uIM 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 
-- # len=48 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=ca24db2b7f95aea4d3e42ed8ca3b8c814f0a17f55dbf4674 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.he7 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key ca24db2b7f95aea4d3e42ed8ca3b8c814f0a17f55dbf4674 2 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 ca24db2b7f95aea4d3e42ed8ca3b8c814f0a17f55dbf4674 2 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=ca24db2b7f95aea4d3e42ed8ca3b8c814f0a17f55dbf4674 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.he7 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.he7 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.he7 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=a745f8c947fda5f42592a33fe99de5e8c3d75a67ab12a748 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.YMX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key a745f8c947fda5f42592a33fe99de5e8c3d75a67ab12a748 2 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 a745f8c947fda5f42592a33fe99de5e8c3d75a67ab12a748 2 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=a745f8c947fda5f42592a33fe99de5e8c3d75a67ab12a748 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.YMX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.YMX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.YMX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=227cf2a54c58e3a062d8017602f581ae 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.QZT 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 227cf2a54c58e3a062d8017602f581ae 1 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 227cf2a54c58e3a062d8017602f581ae 1 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=227cf2a54c58e3a062d8017602f581ae 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.QZT 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.QZT 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.QZT 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=8bd1e79a7a14a0328f14a48b9b82ce67d5154a9fac511fdd83783a3c2c8ae14b 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.g7a 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 8bd1e79a7a14a0328f14a48b9b82ce67d5154a9fac511fdd83783a3c2c8ae14b 3 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 8bd1e79a7a14a0328f14a48b9b82ce67d5154a9fac511fdd83783a3c2c8ae14b 3 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=8bd1e79a7a14a0328f14a48b9b82ce67d5154a9fac511fdd83783a3c2c8ae14b 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:27:57.789 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.g7a 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.g7a 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.g7a 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76777 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76777 ']' 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
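
At this point each keys[i] file holds the DH-CHAP secret for keyid i and each ckeys[i] the optional controller secret used for bidirectional authentication; ckeys[3] is deliberately left empty so key3 exercises unidirectional auth. What target/auth.sh@108-113 does next (visible in the trace below) is register every file with both RPC servers under matching keyring names. A condensed sketch of that registration, reusing this run's temp-file names; the loop is a restatement of the step, not the literal script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.Do5 /tmp/spdk.key-sha256.uIM /tmp/spdk.key-sha384.YMX /tmp/spdk.key-sha512.g7a)
ckeys=(/tmp/spdk.key-sha512.8Sf /tmp/spdk.key-sha384.he7 /tmp/spdk.key-sha256.QZT "")
for i in "${!keys[@]}"; do
  "$rpc" keyring_file_add_key "key$i" "${keys[i]}"                        # target (default /var/tmp/spdk.sock)
  "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"  # host initiator
  if [[ -n ${ckeys[i]} ]]; then  # ckeys[3] is empty, so key3 gets no ctrlr key
    "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done
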
00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.048 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76821 /var/tmp/host.sock 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76821 ']' 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.305 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Do5 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Do5 00:27:58.562 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Do5 00:27:58.819 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.8Sf ]] 00:27:58.819 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Sf 00:27:58.819 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
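
Once all keys are registered, every round below follows the same pattern: pin the host initiator to one digest/dhgroup combination, grant the host NQN access to cnode0 with the key pair for that keyid, attach an SPDK controller and check the negotiated auth parameters on the resulting qpair, detach, then repeat the connect from the kernel initiator using the literal DHHC-1 secrets. A hedged sketch of one such round (sha256/null/key0, the first in the trace); `rpc` and `hostnqn` are local shorthand and the DHHC-1 strings are elided placeholders, while the commands and flags are the ones the trace itself runs:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6

# host side: only negotiate this digest/dhgroup pair
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# target side: allow the host on cnode0, with ckey0 enabling bidirectional auth
"$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# SPDK initiator: attach a controller, authenticating with the same pair
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify DH-CHAP actually completed with the expected parameters
"$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'   # expect: sha256 null completed

# detach, then redo the connect from the kernel initiator with the raw secrets
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
    --hostid "${hostnqn#*uuid:}" -l 0 \
    --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Later rounds in the trace swap in the other keyids and, from target/auth.sh@121 onward, the ffdhe2048 dhgroup in place of null.
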
00:27:58.819 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.819 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.819 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Sf 00:27:58.819 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Sf 00:27:59.077 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:27:59.077 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uIM 00:27:59.077 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.077 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.077 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.077 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uIM 00:27:59.077 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uIM 00:27:59.643 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.he7 ]] 00:27:59.643 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.he7 00:27:59.643 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.643 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.643 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.643 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.he7 00:27:59.643 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.he7 00:27:59.901 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:27:59.901 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YMX 00:27:59.901 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.901 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.901 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.901 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YMX 00:27:59.901 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YMX 00:28:00.160 11:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.QZT ]] 00:28:00.160 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QZT 00:28:00.160 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.160 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.160 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.160 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QZT 00:28:00.160 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QZT 00:28:00.418 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:28:00.418 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.g7a 00:28:00.418 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.418 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.418 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.418 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.g7a 00:28:00.418 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.g7a 00:28:00.677 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:28:00.677 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:28:00.677 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.677 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:00.677 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:00.677 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:00.935 11:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.935 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.500 00:28:01.500 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:01.500 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:01.500 11:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:01.758 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.758 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:01.758 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.758 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:01.759 { 00:28:01.759 "auth": { 00:28:01.759 "dhgroup": "null", 00:28:01.759 "digest": "sha256", 00:28:01.759 "state": "completed" 00:28:01.759 }, 00:28:01.759 "cntlid": 1, 00:28:01.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:01.759 "listen_address": { 00:28:01.759 "adrfam": "IPv4", 00:28:01.759 "traddr": "10.0.0.2", 00:28:01.759 "trsvcid": "4420", 00:28:01.759 "trtype": "TCP" 00:28:01.759 }, 00:28:01.759 "peer_address": { 00:28:01.759 "adrfam": "IPv4", 00:28:01.759 "traddr": "10.0.0.1", 00:28:01.759 "trsvcid": "35870", 00:28:01.759 "trtype": "TCP" 00:28:01.759 }, 00:28:01.759 "qid": 0, 00:28:01.759 "state": "enabled", 00:28:01.759 "thread": "nvmf_tgt_poll_group_000" 00:28:01.759 } 00:28:01.759 ]' 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:01.759 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:02.325 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:02.325 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:06.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:06.506 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:07.071 
11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.071 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.329 00:28:07.329 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:07.329 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:07.329 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:07.587 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.587 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:07.587 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.587 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:07.845 { 00:28:07.845 "auth": { 00:28:07.845 "dhgroup": "null", 00:28:07.845 "digest": "sha256", 00:28:07.845 "state": "completed" 00:28:07.845 }, 00:28:07.845 "cntlid": 3, 00:28:07.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:07.845 "listen_address": { 00:28:07.845 "adrfam": "IPv4", 00:28:07.845 "traddr": "10.0.0.2", 00:28:07.845 "trsvcid": "4420", 00:28:07.845 "trtype": "TCP" 00:28:07.845 }, 00:28:07.845 "peer_address": { 00:28:07.845 "adrfam": "IPv4", 00:28:07.845 "traddr": "10.0.0.1", 00:28:07.845 "trsvcid": "34552", 00:28:07.845 "trtype": "TCP" 00:28:07.845 }, 00:28:07.845 "qid": 0, 00:28:07.845 "state": "enabled", 00:28:07.845 "thread": "nvmf_tgt_poll_group_000" 00:28:07.845 } 00:28:07.845 ]' 00:28:07.845 11:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:07.845 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:08.411 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:08.411 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:08.978 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:08.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:08.979 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:08.979 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.979 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.979 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.979 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:08.979 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:08.979 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:09.545 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:28:09.545 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:09.545 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:09.545 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:09.545 
11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.546 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.804 00:28:09.804 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:09.804 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:09.804 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:10.062 { 00:28:10.062 "auth": { 00:28:10.062 "dhgroup": "null", 00:28:10.062 "digest": "sha256", 00:28:10.062 "state": "completed" 00:28:10.062 }, 00:28:10.062 "cntlid": 5, 00:28:10.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:10.062 "listen_address": { 00:28:10.062 "adrfam": "IPv4", 00:28:10.062 "traddr": "10.0.0.2", 00:28:10.062 "trsvcid": "4420", 00:28:10.062 "trtype": "TCP" 00:28:10.062 }, 00:28:10.062 "peer_address": { 00:28:10.062 "adrfam": "IPv4", 00:28:10.062 "traddr": "10.0.0.1", 00:28:10.062 "trsvcid": "34570", 00:28:10.062 "trtype": "TCP" 00:28:10.062 }, 00:28:10.062 "qid": 0, 00:28:10.062 "state": "enabled", 00:28:10.062 "thread": 
"nvmf_tgt_poll_group_000" 00:28:10.062 } 00:28:10.062 ]' 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:10.062 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:10.322 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:10.322 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:10.322 11:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:10.599 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:10.599 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:11.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:11.176 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:11.435 11:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:11.435 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:12.002 00:28:12.002 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:12.002 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:12.002 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:12.260 { 00:28:12.260 "auth": { 00:28:12.260 "dhgroup": "null", 00:28:12.260 "digest": "sha256", 00:28:12.260 "state": "completed" 00:28:12.260 }, 00:28:12.260 "cntlid": 7, 00:28:12.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:12.260 "listen_address": { 00:28:12.260 "adrfam": "IPv4", 00:28:12.260 "traddr": "10.0.0.2", 00:28:12.260 "trsvcid": "4420", 00:28:12.260 "trtype": "TCP" 00:28:12.260 }, 00:28:12.260 "peer_address": { 00:28:12.260 "adrfam": "IPv4", 00:28:12.260 "traddr": "10.0.0.1", 00:28:12.260 "trsvcid": "34596", 00:28:12.260 "trtype": "TCP" 00:28:12.260 }, 00:28:12.260 "qid": 0, 00:28:12.260 "state": "enabled", 00:28:12.260 "thread": 
"nvmf_tgt_poll_group_000" 00:28:12.260 } 00:28:12.260 ]' 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:12.260 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:12.519 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:12.519 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:12.519 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:12.777 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:12.777 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:13.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:13.712 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:13.971 11:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.971 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.318 00:28:14.318 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:14.318 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:14.318 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:14.580 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.580 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:14.580 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.580 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.580 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.580 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:14.580 { 00:28:14.580 "auth": { 00:28:14.580 "dhgroup": "ffdhe2048", 00:28:14.580 "digest": "sha256", 00:28:14.580 "state": "completed" 00:28:14.580 }, 00:28:14.580 "cntlid": 9, 00:28:14.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:14.580 "listen_address": { 00:28:14.580 "adrfam": "IPv4", 00:28:14.580 "traddr": "10.0.0.2", 00:28:14.580 "trsvcid": "4420", 00:28:14.580 "trtype": "TCP" 00:28:14.580 }, 00:28:14.580 "peer_address": { 00:28:14.580 "adrfam": "IPv4", 00:28:14.580 "traddr": "10.0.0.1", 00:28:14.580 "trsvcid": "34622", 00:28:14.580 "trtype": "TCP" 
00:28:14.580 }, 00:28:14.580 "qid": 0, 00:28:14.580 "state": "enabled", 00:28:14.580 "thread": "nvmf_tgt_poll_group_000" 00:28:14.580 } 00:28:14.580 ]' 00:28:14.580 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:14.838 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:14.838 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:14.838 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:14.839 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:14.839 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:14.839 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:14.839 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:15.097 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:15.097 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:16.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:16.031 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.290 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.548 00:28:16.548 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:16.548 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:16.548 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:16.806 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.806 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:16.806 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.806 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:16.806 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.806 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:16.806 { 00:28:16.806 "auth": { 00:28:16.806 "dhgroup": "ffdhe2048", 00:28:16.806 "digest": "sha256", 00:28:16.806 "state": "completed" 00:28:16.806 }, 00:28:16.806 "cntlid": 11, 00:28:16.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:16.806 "listen_address": { 00:28:16.806 "adrfam": "IPv4", 00:28:16.806 "traddr": "10.0.0.2", 00:28:16.806 "trsvcid": "4420", 00:28:16.806 
"trtype": "TCP" 00:28:16.806 }, 00:28:16.806 "peer_address": { 00:28:16.806 "adrfam": "IPv4", 00:28:16.806 "traddr": "10.0.0.1", 00:28:16.806 "trsvcid": "34638", 00:28:16.806 "trtype": "TCP" 00:28:16.806 }, 00:28:16.806 "qid": 0, 00:28:16.806 "state": "enabled", 00:28:16.807 "thread": "nvmf_tgt_poll_group_000" 00:28:16.807 } 00:28:16.807 ]' 00:28:16.807 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:17.064 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:17.064 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:17.064 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:17.064 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:17.064 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:17.064 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:17.064 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:17.322 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:17.322 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:17.885 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:17.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:17.885 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:17.885 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.885 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.885 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.885 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:17.886 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:17.886 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe2048 2 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.451 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.709 00:28:18.709 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:18.709 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:18.709 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:18.967 { 00:28:18.967 "auth": { 00:28:18.967 "dhgroup": "ffdhe2048", 00:28:18.967 "digest": "sha256", 00:28:18.967 "state": "completed" 00:28:18.967 }, 00:28:18.967 "cntlid": 13, 00:28:18.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:18.967 "listen_address": { 
00:28:18.967 "adrfam": "IPv4", 00:28:18.967 "traddr": "10.0.0.2", 00:28:18.967 "trsvcid": "4420", 00:28:18.967 "trtype": "TCP" 00:28:18.967 }, 00:28:18.967 "peer_address": { 00:28:18.967 "adrfam": "IPv4", 00:28:18.967 "traddr": "10.0.0.1", 00:28:18.967 "trsvcid": "53590", 00:28:18.967 "trtype": "TCP" 00:28:18.967 }, 00:28:18.967 "qid": 0, 00:28:18.967 "state": "enabled", 00:28:18.967 "thread": "nvmf_tgt_poll_group_000" 00:28:18.967 } 00:28:18.967 ]' 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:18.967 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:19.224 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:19.224 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:19.224 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:19.224 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:19.224 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:19.482 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:19.482 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:20.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:20.047 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:20.305 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:20.880 00:28:20.880 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:20.880 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:20.880 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:21.138 { 00:28:21.138 "auth": { 00:28:21.138 "dhgroup": "ffdhe2048", 00:28:21.138 "digest": "sha256", 00:28:21.138 "state": "completed" 00:28:21.138 }, 00:28:21.138 "cntlid": 15, 00:28:21.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:21.138 
"listen_address": { 00:28:21.138 "adrfam": "IPv4", 00:28:21.138 "traddr": "10.0.0.2", 00:28:21.138 "trsvcid": "4420", 00:28:21.138 "trtype": "TCP" 00:28:21.138 }, 00:28:21.138 "peer_address": { 00:28:21.138 "adrfam": "IPv4", 00:28:21.138 "traddr": "10.0.0.1", 00:28:21.138 "trsvcid": "53622", 00:28:21.138 "trtype": "TCP" 00:28:21.138 }, 00:28:21.138 "qid": 0, 00:28:21.138 "state": "enabled", 00:28:21.138 "thread": "nvmf_tgt_poll_group_000" 00:28:21.138 } 00:28:21.138 ]' 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:21.138 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:21.396 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:21.396 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:21.396 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:21.695 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:21.695 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:22.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:22.285 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.544 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.111 00:28:23.111 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:23.111 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:23.111 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:23.370 { 00:28:23.370 "auth": { 00:28:23.370 "dhgroup": "ffdhe3072", 00:28:23.370 "digest": "sha256", 00:28:23.370 "state": "completed" 00:28:23.370 }, 00:28:23.370 
"cntlid": 17, 00:28:23.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:23.370 "listen_address": { 00:28:23.370 "adrfam": "IPv4", 00:28:23.370 "traddr": "10.0.0.2", 00:28:23.370 "trsvcid": "4420", 00:28:23.370 "trtype": "TCP" 00:28:23.370 }, 00:28:23.370 "peer_address": { 00:28:23.370 "adrfam": "IPv4", 00:28:23.370 "traddr": "10.0.0.1", 00:28:23.370 "trsvcid": "53654", 00:28:23.370 "trtype": "TCP" 00:28:23.370 }, 00:28:23.370 "qid": 0, 00:28:23.370 "state": "enabled", 00:28:23.370 "thread": "nvmf_tgt_poll_group_000" 00:28:23.370 } 00:28:23.370 ]' 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:23.370 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:23.628 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:23.628 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:24.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:28:24.563 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.822 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.823 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.823 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.823 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.823 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.823 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.082 00:28:25.082 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:25.082 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:25.082 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:25.366 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.366 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:25.366 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.366 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:25.366 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.366 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:28:25.366 { 00:28:25.366 "auth": { 00:28:25.366 "dhgroup": "ffdhe3072", 00:28:25.366 "digest": "sha256", 00:28:25.366 "state": "completed" 00:28:25.366 }, 00:28:25.367 "cntlid": 19, 00:28:25.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:25.367 "listen_address": { 00:28:25.367 "adrfam": "IPv4", 00:28:25.367 "traddr": "10.0.0.2", 00:28:25.367 "trsvcid": "4420", 00:28:25.367 "trtype": "TCP" 00:28:25.367 }, 00:28:25.367 "peer_address": { 00:28:25.367 "adrfam": "IPv4", 00:28:25.367 "traddr": "10.0.0.1", 00:28:25.367 "trsvcid": "53674", 00:28:25.367 "trtype": "TCP" 00:28:25.367 }, 00:28:25.367 "qid": 0, 00:28:25.367 "state": "enabled", 00:28:25.367 "thread": "nvmf_tgt_poll_group_000" 00:28:25.367 } 00:28:25.367 ]' 00:28:25.367 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:25.367 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:25.367 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:25.673 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:25.673 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:25.673 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:25.673 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:25.673 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:25.931 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:25.931 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:26.498 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:26.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:26.498 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:26.498 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.498 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.498 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.498 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:26.498 11:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:26.498 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.756 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.324 00:28:27.324 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:27.324 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:27.324 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.584 11:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:27.584 { 00:28:27.584 "auth": { 00:28:27.584 "dhgroup": "ffdhe3072", 00:28:27.584 "digest": "sha256", 00:28:27.584 "state": "completed" 00:28:27.584 }, 00:28:27.584 "cntlid": 21, 00:28:27.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:27.584 "listen_address": { 00:28:27.584 "adrfam": "IPv4", 00:28:27.584 "traddr": "10.0.0.2", 00:28:27.584 "trsvcid": "4420", 00:28:27.584 "trtype": "TCP" 00:28:27.584 }, 00:28:27.584 "peer_address": { 00:28:27.584 "adrfam": "IPv4", 00:28:27.584 "traddr": "10.0.0.1", 00:28:27.584 "trsvcid": "53704", 00:28:27.584 "trtype": "TCP" 00:28:27.584 }, 00:28:27.584 "qid": 0, 00:28:27.584 "state": "enabled", 00:28:27.584 "thread": "nvmf_tgt_poll_group_000" 00:28:27.584 } 00:28:27.584 ]' 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:27.584 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:27.843 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:27.843 11:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:28.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
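Each block of trace above is one iteration of the same connect_authenticate sequence, swept across dhgroups and key IDs. A minimal sketch of that sequence, reconstructed from the traced commands (the hostrpc/rpc_cmd helper names and the loop variables are inferred from the target/auth.sh@31 and "for dhgroup"/"for keyid" markers in the trace; the key0..key3/ckey0..ckey2 names refer to keys set up earlier in the script, not shown here):

    # One iteration, using the values from the block above as an example.
    digest=sha256 dhgroup=ffdhe3072 keyid=2
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Restrict the host to one digest/dhgroup pair, then authorize the host
    # on the target with the key under test (the ctrlr key is omitted for
    # key3, whose ckey slot is empty in the trace).
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach, then verify the controller came up and the qpair authenticated.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
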
00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:28.778 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:29.037 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:29.295 00:28:29.295 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:29.295 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:29.295 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:29.555 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.555 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:29.555 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.555 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.555 
11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.555 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:29.555 { 00:28:29.555 "auth": { 00:28:29.555 "dhgroup": "ffdhe3072", 00:28:29.555 "digest": "sha256", 00:28:29.555 "state": "completed" 00:28:29.555 }, 00:28:29.555 "cntlid": 23, 00:28:29.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:29.555 "listen_address": { 00:28:29.555 "adrfam": "IPv4", 00:28:29.555 "traddr": "10.0.0.2", 00:28:29.555 "trsvcid": "4420", 00:28:29.555 "trtype": "TCP" 00:28:29.555 }, 00:28:29.555 "peer_address": { 00:28:29.555 "adrfam": "IPv4", 00:28:29.555 "traddr": "10.0.0.1", 00:28:29.555 "trsvcid": "44558", 00:28:29.555 "trtype": "TCP" 00:28:29.555 }, 00:28:29.555 "qid": 0, 00:28:29.555 "state": "enabled", 00:28:29.555 "thread": "nvmf_tgt_poll_group_000" 00:28:29.555 } 00:28:29.555 ]' 00:28:29.555 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:29.814 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:29.814 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:29.814 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:29.814 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:29.814 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:29.814 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:29.814 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:30.073 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:30.073 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:31.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.008 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.265 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.523 00:28:31.523 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:31.523 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:31.523 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:31.781 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.781 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:31.781 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:31.781 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.781 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.781 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:31.781 { 00:28:31.781 "auth": { 00:28:31.781 "dhgroup": "ffdhe4096", 00:28:31.781 "digest": "sha256", 00:28:31.781 "state": "completed" 00:28:31.781 }, 00:28:31.781 "cntlid": 25, 00:28:31.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:31.781 "listen_address": { 00:28:31.781 "adrfam": "IPv4", 00:28:31.781 "traddr": "10.0.0.2", 00:28:31.781 "trsvcid": "4420", 00:28:31.781 "trtype": "TCP" 00:28:31.781 }, 00:28:31.781 "peer_address": { 00:28:31.781 "adrfam": "IPv4", 00:28:31.781 "traddr": "10.0.0.1", 00:28:31.781 "trsvcid": "44578", 00:28:31.781 "trtype": "TCP" 00:28:31.781 }, 00:28:31.781 "qid": 0, 00:28:31.781 "state": "enabled", 00:28:31.781 "thread": "nvmf_tgt_poll_group_000" 00:28:31.781 } 00:28:31.781 ]' 00:28:31.781 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:32.039 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:32.039 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:32.039 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:32.039 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:32.039 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:32.039 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:32.039 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:32.297 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:32.297 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:32.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
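
Each attach is then checked against the target's own view of the connection: nvmf_subsystem_get_qpairs returns the JSON dumped above, and the three jq filters just traced assert the negotiated digest, DH group, and authentication state. As a stand-alone check — jq paths copied from the trace, $rpc as in the sketch above — the verification amounts to:

qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The "completed" state is the part that matters: it means the DH-HMAC-CHAP exchange actually finished, rather than the qpair merely having been established.
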
00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:32.864 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:33.122 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:28:33.122 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:33.122 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.123 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.689 00:28:33.690 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:33.690 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:33.690 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:33.948 { 00:28:33.948 "auth": { 00:28:33.948 "dhgroup": "ffdhe4096", 00:28:33.948 "digest": "sha256", 00:28:33.948 "state": "completed" 00:28:33.948 }, 00:28:33.948 "cntlid": 27, 00:28:33.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:33.948 "listen_address": { 00:28:33.948 "adrfam": "IPv4", 00:28:33.948 "traddr": "10.0.0.2", 00:28:33.948 "trsvcid": "4420", 00:28:33.948 "trtype": "TCP" 00:28:33.948 }, 00:28:33.948 "peer_address": { 00:28:33.948 "adrfam": "IPv4", 00:28:33.948 "traddr": "10.0.0.1", 00:28:33.948 "trsvcid": "44610", 00:28:33.948 "trtype": "TCP" 00:28:33.948 }, 00:28:33.948 "qid": 0, 00:28:33.948 "state": "enabled", 00:28:33.948 "thread": "nvmf_tgt_poll_group_000" 00:28:33.948 } 00:28:33.948 ]' 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:33.948 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:33.949 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:33.949 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:33.949 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:33.949 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:33.949 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:33.949 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:34.207 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:34.207 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:35.145 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:35.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:35.145 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:35.145 
11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.145 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.145 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.145 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:35.145 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:35.145 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.404 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.662 00:28:35.662 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:35.662 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:35.662 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:35.921 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.921 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:35.921 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.921 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:35.921 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.921 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:35.921 { 00:28:35.921 "auth": { 00:28:35.921 "dhgroup": "ffdhe4096", 00:28:35.921 "digest": "sha256", 00:28:35.921 "state": "completed" 00:28:35.921 }, 00:28:35.921 "cntlid": 29, 00:28:35.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:35.921 "listen_address": { 00:28:35.921 "adrfam": "IPv4", 00:28:35.921 "traddr": "10.0.0.2", 00:28:35.921 "trsvcid": "4420", 00:28:35.921 "trtype": "TCP" 00:28:35.921 }, 00:28:35.921 "peer_address": { 00:28:35.921 "adrfam": "IPv4", 00:28:35.921 "traddr": "10.0.0.1", 00:28:35.921 "trsvcid": "44630", 00:28:35.921 "trtype": "TCP" 00:28:35.921 }, 00:28:35.921 "qid": 0, 00:28:35.921 "state": "enabled", 00:28:35.921 "thread": "nvmf_tgt_poll_group_000" 00:28:35.921 } 00:28:35.921 ]' 00:28:35.921 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:35.922 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:35.922 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:36.181 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:36.181 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:36.181 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:36.181 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:36.181 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:36.439 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:36.439 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:37.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:37.006 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:37.265 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:37.830 00:28:37.830 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:37.830 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:37.830 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:38.088 11:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:38.088 { 00:28:38.088 "auth": { 00:28:38.088 "dhgroup": "ffdhe4096", 00:28:38.088 "digest": "sha256", 00:28:38.088 "state": "completed" 00:28:38.088 }, 00:28:38.088 "cntlid": 31, 00:28:38.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:38.088 "listen_address": { 00:28:38.088 "adrfam": "IPv4", 00:28:38.088 "traddr": "10.0.0.2", 00:28:38.088 "trsvcid": "4420", 00:28:38.088 "trtype": "TCP" 00:28:38.088 }, 00:28:38.088 "peer_address": { 00:28:38.088 "adrfam": "IPv4", 00:28:38.088 "traddr": "10.0.0.1", 00:28:38.088 "trsvcid": "33048", 00:28:38.088 "trtype": "TCP" 00:28:38.088 }, 00:28:38.088 "qid": 0, 00:28:38.088 "state": "enabled", 00:28:38.088 "thread": "nvmf_tgt_poll_group_000" 00:28:38.088 } 00:28:38.088 ]' 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:38.088 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:38.346 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:38.346 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:38.346 11:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:38.603 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:38.603 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:39.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:39.168 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.426 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.991 00:28:39.991 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:39.991 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:39.991 11:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:40.249 { 00:28:40.249 "auth": { 00:28:40.249 "dhgroup": "ffdhe6144", 00:28:40.249 "digest": "sha256", 00:28:40.249 "state": "completed" 00:28:40.249 }, 00:28:40.249 "cntlid": 33, 00:28:40.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:40.249 "listen_address": { 00:28:40.249 "adrfam": "IPv4", 00:28:40.249 "traddr": "10.0.0.2", 00:28:40.249 "trsvcid": "4420", 00:28:40.249 "trtype": "TCP" 00:28:40.249 }, 00:28:40.249 "peer_address": { 00:28:40.249 "adrfam": "IPv4", 00:28:40.249 "traddr": "10.0.0.1", 00:28:40.249 "trsvcid": "33072", 00:28:40.249 "trtype": "TCP" 00:28:40.249 }, 00:28:40.249 "qid": 0, 00:28:40.249 "state": "enabled", 00:28:40.249 "thread": "nvmf_tgt_poll_group_000" 00:28:40.249 } 00:28:40.249 ]' 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:40.249 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:40.507 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:40.507 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 
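
Besides the SPDK-host attach, each iteration also exercises the kernel initiator: the nvme connect call traced above hands the same secrets to nvme-cli, where --dhchap-secret authenticates the host to the controller and --dhchap-ctrl-secret requests authentication in the reverse direction as well (bidirectional DH-HMAC-CHAP). The DHHC-1:NN: prefix on each secret encodes, per the NVMe secret representation, which hash (if any) was used to transform the key material: 00 for an unhashed secret, 01/02/03 for SHA-256/384/512. Stripped of the test plumbing, and with the base64 key material elided as placeholders, the shape of the call is:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:00:<base64 key>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<base64 key>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
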
00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:41.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:41.446 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.705 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.271 00:28:42.271 11:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:42.271 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:42.271 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:42.529 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.529 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:42.529 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.529 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.529 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.530 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:42.530 { 00:28:42.530 "auth": { 00:28:42.530 "dhgroup": "ffdhe6144", 00:28:42.530 "digest": "sha256", 00:28:42.530 "state": "completed" 00:28:42.530 }, 00:28:42.530 "cntlid": 35, 00:28:42.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:42.530 "listen_address": { 00:28:42.530 "adrfam": "IPv4", 00:28:42.530 "traddr": "10.0.0.2", 00:28:42.530 "trsvcid": "4420", 00:28:42.530 "trtype": "TCP" 00:28:42.530 }, 00:28:42.530 "peer_address": { 00:28:42.530 "adrfam": "IPv4", 00:28:42.530 "traddr": "10.0.0.1", 00:28:42.530 "trsvcid": "33082", 00:28:42.530 "trtype": "TCP" 00:28:42.530 }, 00:28:42.530 "qid": 0, 00:28:42.530 "state": "enabled", 00:28:42.530 "thread": "nvmf_tgt_poll_group_000" 00:28:42.530 } 00:28:42.530 ]' 00:28:42.530 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:42.530 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:42.530 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:42.530 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:42.530 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:42.788 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:42.788 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:42.788 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:43.046 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:43.046 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret 
DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:43.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:43.612 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.871 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.438 00:28:44.438 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:44.438 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:44.438 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:44.722 { 00:28:44.722 "auth": { 00:28:44.722 "dhgroup": "ffdhe6144", 00:28:44.722 "digest": "sha256", 00:28:44.722 "state": "completed" 00:28:44.722 }, 00:28:44.722 "cntlid": 37, 00:28:44.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:44.722 "listen_address": { 00:28:44.722 "adrfam": "IPv4", 00:28:44.722 "traddr": "10.0.0.2", 00:28:44.722 "trsvcid": "4420", 00:28:44.722 "trtype": "TCP" 00:28:44.722 }, 00:28:44.722 "peer_address": { 00:28:44.722 "adrfam": "IPv4", 00:28:44.722 "traddr": "10.0.0.1", 00:28:44.722 "trsvcid": "33102", 00:28:44.722 "trtype": "TCP" 00:28:44.722 }, 00:28:44.722 "qid": 0, 00:28:44.722 "state": "enabled", 00:28:44.722 "thread": "nvmf_tgt_poll_group_000" 00:28:44.722 } 00:28:44.722 ]' 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:44.722 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:45.308 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:45.308 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:45.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.875 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:46.133 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:46.700 00:28:46.700 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:46.700 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:46.700 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:46.959 { 00:28:46.959 "auth": { 00:28:46.959 "dhgroup": "ffdhe6144", 00:28:46.959 "digest": "sha256", 00:28:46.959 "state": "completed" 00:28:46.959 }, 00:28:46.959 "cntlid": 39, 00:28:46.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:46.959 "listen_address": { 00:28:46.959 "adrfam": "IPv4", 00:28:46.959 "traddr": "10.0.0.2", 00:28:46.959 "trsvcid": "4420", 00:28:46.959 "trtype": "TCP" 00:28:46.959 }, 00:28:46.959 "peer_address": { 00:28:46.959 "adrfam": "IPv4", 00:28:46.959 "traddr": "10.0.0.1", 00:28:46.959 "trsvcid": "33126", 00:28:46.959 "trtype": "TCP" 00:28:46.959 }, 00:28:46.959 "qid": 0, 00:28:46.959 "state": "enabled", 00:28:46.959 "thread": "nvmf_tgt_poll_group_000" 00:28:46.959 } 00:28:46.959 ]' 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:46.959 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:47.525 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:47.525 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:48.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.093 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.351 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.922 00:28:48.922 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:48.922 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:48.922 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:49.181 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.181 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:49.181 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.181 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:49.181 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.181 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:49.181 { 00:28:49.181 "auth": { 00:28:49.181 "dhgroup": "ffdhe8192", 00:28:49.181 "digest": "sha256", 00:28:49.181 "state": "completed" 00:28:49.181 }, 00:28:49.181 "cntlid": 41, 00:28:49.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:49.181 "listen_address": { 00:28:49.181 "adrfam": "IPv4", 00:28:49.181 "traddr": "10.0.0.2", 00:28:49.181 "trsvcid": "4420", 00:28:49.181 "trtype": "TCP" 00:28:49.181 }, 00:28:49.181 "peer_address": { 00:28:49.181 "adrfam": "IPv4", 00:28:49.181 "traddr": "10.0.0.1", 00:28:49.181 "trsvcid": "43474", 00:28:49.181 "trtype": "TCP" 00:28:49.181 }, 00:28:49.181 "qid": 0, 00:28:49.181 "state": "enabled", 00:28:49.181 "thread": "nvmf_tgt_poll_group_000" 00:28:49.181 } 00:28:49.181 ]' 00:28:49.181 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:49.440 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:49.440 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:49.440 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:49.440 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:49.440 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:49.440 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:49.440 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:49.698 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret 
DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:49.698 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:50.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.633 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.889 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.889 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.889 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.889 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.454 00:28:51.454 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:51.454 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:51.454 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:51.711 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.711 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:51.711 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.711 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:51.711 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.711 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:51.711 { 00:28:51.711 "auth": { 00:28:51.711 "dhgroup": "ffdhe8192", 00:28:51.711 "digest": "sha256", 00:28:51.711 "state": "completed" 00:28:51.711 }, 00:28:51.711 "cntlid": 43, 00:28:51.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:51.711 "listen_address": { 00:28:51.711 "adrfam": "IPv4", 00:28:51.711 "traddr": "10.0.0.2", 00:28:51.711 "trsvcid": "4420", 00:28:51.711 "trtype": "TCP" 00:28:51.711 }, 00:28:51.711 "peer_address": { 00:28:51.711 "adrfam": "IPv4", 00:28:51.711 "traddr": "10.0.0.1", 00:28:51.711 "trsvcid": "43484", 00:28:51.712 "trtype": "TCP" 00:28:51.712 }, 00:28:51.712 "qid": 0, 00:28:51.712 "state": "enabled", 00:28:51.712 "thread": "nvmf_tgt_poll_group_000" 00:28:51.712 } 00:28:51.712 ]' 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:51.712 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:51.970 11:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:51.970 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:52.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:52.905 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.163 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.163 11:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.164 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.730 00:28:53.730 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:53.730 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:53.730 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:53.989 { 00:28:53.989 "auth": { 00:28:53.989 "dhgroup": "ffdhe8192", 00:28:53.989 "digest": "sha256", 00:28:53.989 "state": "completed" 00:28:53.989 }, 00:28:53.989 "cntlid": 45, 00:28:53.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:53.989 "listen_address": { 00:28:53.989 "adrfam": "IPv4", 00:28:53.989 "traddr": "10.0.0.2", 00:28:53.989 "trsvcid": "4420", 00:28:53.989 "trtype": "TCP" 00:28:53.989 }, 00:28:53.989 "peer_address": { 00:28:53.989 "adrfam": "IPv4", 00:28:53.989 "traddr": "10.0.0.1", 00:28:53.989 "trsvcid": "43496", 00:28:53.989 "trtype": "TCP" 00:28:53.989 }, 00:28:53.989 "qid": 0, 00:28:53.989 "state": "enabled", 00:28:53.989 "thread": "nvmf_tgt_poll_group_000" 00:28:53.989 } 00:28:53.989 ]' 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:53.989 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:54.247 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:54.247 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:54.247 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:54.247 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:54.247 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:54.505 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:54.505 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:55.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:55.440 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:55.698 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:28:56.269 00:28:56.269 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:56.269 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:56.269 11:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:56.529 { 00:28:56.529 "auth": { 00:28:56.529 "dhgroup": "ffdhe8192", 00:28:56.529 "digest": "sha256", 00:28:56.529 "state": "completed" 00:28:56.529 }, 00:28:56.529 "cntlid": 47, 00:28:56.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:56.529 "listen_address": { 00:28:56.529 "adrfam": "IPv4", 00:28:56.529 "traddr": "10.0.0.2", 00:28:56.529 "trsvcid": "4420", 00:28:56.529 "trtype": "TCP" 00:28:56.529 }, 00:28:56.529 "peer_address": { 00:28:56.529 "adrfam": "IPv4", 00:28:56.529 "traddr": "10.0.0.1", 00:28:56.529 "trsvcid": "43524", 00:28:56.529 "trtype": "TCP" 00:28:56.529 }, 00:28:56.529 "qid": 0, 00:28:56.529 "state": "enabled", 00:28:56.529 "thread": "nvmf_tgt_poll_group_000" 00:28:56.529 } 00:28:56.529 ]' 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:56.529 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:56.789 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:56.789 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:56.789 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:57.047 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:57.048 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:57.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:57.614 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:57.615 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.873 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.874 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.874 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.874 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.874 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.874 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:58.132 00:28:58.132 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:28:58.132 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:58.132 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:28:58.701 { 00:28:58.701 "auth": { 00:28:58.701 "dhgroup": "null", 00:28:58.701 "digest": "sha384", 00:28:58.701 "state": "completed" 00:28:58.701 }, 00:28:58.701 "cntlid": 49, 00:28:58.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:28:58.701 "listen_address": { 00:28:58.701 "adrfam": "IPv4", 00:28:58.701 "traddr": "10.0.0.2", 00:28:58.701 "trsvcid": "4420", 00:28:58.701 "trtype": "TCP" 00:28:58.701 }, 00:28:58.701 "peer_address": { 00:28:58.701 "adrfam": "IPv4", 00:28:58.701 "traddr": "10.0.0.1", 00:28:58.701 "trsvcid": "43198", 00:28:58.701 "trtype": "TCP" 00:28:58.701 }, 00:28:58.701 "qid": 0, 00:28:58.701 "state": "enabled", 00:28:58.701 "thread": "nvmf_tgt_poll_group_000" 00:28:58.701 } 00:28:58.701 ]' 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:58.701 11:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:58.701 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:58.959 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:58.959 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:28:59.921 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:59.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:59.922 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:28:59.922 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.922 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.922 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.922 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:28:59.922 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:59.922 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.179 11:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.179 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.437 00:29:00.437 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:00.437 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:00.437 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:00.695 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.695 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:00.695 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.695 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.695 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.695 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:00.695 { 00:29:00.695 "auth": { 00:29:00.695 "dhgroup": "null", 00:29:00.695 "digest": "sha384", 00:29:00.695 "state": "completed" 00:29:00.695 }, 00:29:00.695 "cntlid": 51, 00:29:00.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:00.695 "listen_address": { 00:29:00.695 "adrfam": "IPv4", 00:29:00.695 "traddr": "10.0.0.2", 00:29:00.695 "trsvcid": "4420", 00:29:00.695 "trtype": "TCP" 00:29:00.695 }, 00:29:00.695 "peer_address": { 00:29:00.695 "adrfam": "IPv4", 00:29:00.695 "traddr": "10.0.0.1", 00:29:00.695 "trsvcid": "43214", 00:29:00.695 "trtype": "TCP" 00:29:00.695 }, 00:29:00.695 "qid": 0, 00:29:00.695 "state": "enabled", 00:29:00.695 "thread": "nvmf_tgt_poll_group_000" 00:29:00.695 } 00:29:00.695 ]' 00:29:00.695 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:00.955 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:00.955 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:00.955 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:00.955 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
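The exchange above is one pass of the verification pattern this whole section repeats: pin the SPDK host stack to a single digest/dhgroup pair, register the host NQN on the target with a DH-HMAC-CHAP key, attach a controller, then read the negotiated auth parameters back out of the qpair listing. A condensed sketch of that sequence, with the socket path, addresses, and NQNs copied from the log (key1/ckey1 refer to keys registered earlier in the test, and rpc stands in for the full rpc.py path):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
    subnqn=nqn.2024-03.io.spdk:cnode0
    # Host side: restrict DH-HMAC-CHAP negotiation to the pair under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
    # Target side: allow this host with key1 (ckey1 adds bidirectional auth).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attach via the SPDK host stack, then inspect what was negotiated.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .digest, .dhgroup, .state'

The jq checks in the log assert exactly those three fields: the digest and dhgroup match what bdev_nvme_set_options allowed, and state is "completed".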
00:29:00.955 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:00.955 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:00.955 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:01.213 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:01.213 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:02.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:02.166 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.426 11:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.684 00:29:02.684 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:02.684 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:02.684 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:02.941 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.942 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:02.942 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.942 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:03.199 { 00:29:03.199 "auth": { 00:29:03.199 "dhgroup": "null", 00:29:03.199 "digest": "sha384", 00:29:03.199 "state": "completed" 00:29:03.199 }, 00:29:03.199 "cntlid": 53, 00:29:03.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:03.199 "listen_address": { 00:29:03.199 "adrfam": "IPv4", 00:29:03.199 "traddr": "10.0.0.2", 00:29:03.199 "trsvcid": "4420", 00:29:03.199 "trtype": "TCP" 00:29:03.199 }, 00:29:03.199 "peer_address": { 00:29:03.199 "adrfam": "IPv4", 00:29:03.199 "traddr": "10.0.0.1", 00:29:03.199 "trsvcid": "43228", 00:29:03.199 "trtype": "TCP" 00:29:03.199 }, 00:29:03.199 "qid": 0, 00:29:03.199 "state": "enabled", 00:29:03.199 "thread": "nvmf_tgt_poll_group_000" 00:29:03.199 } 00:29:03.199 ]' 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:03.199 11:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:03.199 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:03.459 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:03.459 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:04.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:04.396 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 
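Worth noting in the trace just above: for keyid 3 the ckey array expands to nothing, so nvmf_subsystem_add_host (and the later attach) run with only --dhchap-key key3, i.e. unidirectional authentication. That comes from the ${var:+word} expansion visible at target/auth.sh@68. A minimal illustration, assuming illustrative array contents (in the test they are key names set up earlier):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
    ckeys=(ckey0 ckey1 ckey2 "")   # no controller key behind index 3 in this run
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # ${ckeys[3]} is empty, so ckey=() and the flag is simply omitted:
    echo nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"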
00:29:04.654 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.655 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.655 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.655 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:04.655 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:04.655 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:04.913 00:29:04.913 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:04.914 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:04.914 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:05.171 { 00:29:05.171 "auth": { 00:29:05.171 "dhgroup": "null", 00:29:05.171 "digest": "sha384", 00:29:05.171 "state": "completed" 00:29:05.171 }, 00:29:05.171 "cntlid": 55, 00:29:05.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:05.171 "listen_address": { 00:29:05.171 "adrfam": "IPv4", 00:29:05.171 "traddr": "10.0.0.2", 00:29:05.171 "trsvcid": "4420", 00:29:05.171 "trtype": "TCP" 00:29:05.171 }, 00:29:05.171 "peer_address": { 00:29:05.171 "adrfam": "IPv4", 00:29:05.171 "traddr": "10.0.0.1", 00:29:05.171 "trsvcid": "43252", 00:29:05.171 "trtype": "TCP" 00:29:05.171 }, 00:29:05.171 "qid": 0, 00:29:05.171 "state": "enabled", 00:29:05.171 "thread": "nvmf_tgt_poll_group_000" 00:29:05.171 } 00:29:05.171 ]' 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:05.171 11:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:05.171 11:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:05.738 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:05.738 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:06.313 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:06.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:06.313 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:06.314 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.314 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:06.314 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.314 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:06.314 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:06.314 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:06.314 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:06.572 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.227 00:29:07.227 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:07.227 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:07.227 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:07.486 { 00:29:07.486 "auth": { 00:29:07.486 "dhgroup": "ffdhe2048", 00:29:07.486 "digest": "sha384", 00:29:07.486 "state": "completed" 00:29:07.486 }, 00:29:07.486 "cntlid": 57, 00:29:07.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:07.486 "listen_address": { 00:29:07.486 "adrfam": "IPv4", 00:29:07.486 "traddr": "10.0.0.2", 00:29:07.486 "trsvcid": "4420", 00:29:07.486 "trtype": "TCP" 00:29:07.486 }, 00:29:07.486 "peer_address": { 00:29:07.486 "adrfam": "IPv4", 00:29:07.486 "traddr": "10.0.0.1", 00:29:07.486 "trsvcid": "43276", 00:29:07.486 "trtype": "TCP" 00:29:07.486 }, 00:29:07.486 "qid": 0, 00:29:07.486 "state": "enabled", 00:29:07.486 "thread": "nvmf_tgt_poll_group_000" 00:29:07.486 } 00:29:07.486 ]' 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:07.486 11:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:07.486 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:07.742 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:07.742 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:08.307 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:08.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:08.566 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:08.566 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.566 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:08.566 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.566 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:08.566 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.566 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
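Each iteration also exercises the kernel initiator: the nvme_connect wrapper above (target/auth.sh@80/@36) hands the same key material to nvme-cli as literal DHHC-1 secrets, and a successful login is confirmed by the clean nvme disconnect that follows. The shape of that call, with the secrets abbreviated here (the log carries the full blobs):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0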
00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.824 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.081 00:29:09.081 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:09.081 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:09.081 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:09.339 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.339 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:09.339 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.339 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.339 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.339 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:09.339 { 00:29:09.339 "auth": { 00:29:09.339 "dhgroup": "ffdhe2048", 00:29:09.339 "digest": "sha384", 00:29:09.339 "state": "completed" 00:29:09.339 }, 00:29:09.339 "cntlid": 59, 00:29:09.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:09.339 "listen_address": { 00:29:09.339 "adrfam": "IPv4", 00:29:09.339 "traddr": "10.0.0.2", 00:29:09.339 "trsvcid": "4420", 00:29:09.339 "trtype": "TCP" 00:29:09.339 }, 00:29:09.339 "peer_address": { 00:29:09.339 "adrfam": "IPv4", 00:29:09.339 "traddr": "10.0.0.1", 00:29:09.339 "trsvcid": "37836", 00:29:09.339 "trtype": "TCP" 00:29:09.339 }, 00:29:09.339 "qid": 0, 00:29:09.339 "state": "enabled", 00:29:09.339 "thread": "nvmf_tgt_poll_group_000" 00:29:09.339 } 00:29:09.339 ]' 00:29:09.339 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:09.598 11:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:09.598 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:09.598 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:09.598 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:09.598 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:09.598 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:09.598 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:09.855 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:09.856 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:10.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:10.787 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
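The --dhchap-secret / --dhchap-ctrl-secret strings handed to nvme connect above follow the secret representation from the NVMe in-band authentication spec (TP 8006) rather than anything SPDK-specific: "DHHC-1:<hash>:<base64 material>:", where the hash field records how the key material was transformed (00 = unhashed; 01, 02, 03 = SHA-256/-384/-512) and the base64 payload carries the secret with an appended CRC-32. A minimal sketch splitting one of the log's secrets into those fields; the field meanings come from the spec, not from this trace.

    # base64 text never contains ':', so a plain IFS split is safe here.
    s='DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8:'
    IFS=: read -r fmt hash b64 _ <<< "$s"
    echo "$fmt"   # DHHC-1 -> secret representation version
    echo "$hash"  # 01     -> SHA-256-transformed key material (00 would be plain)
    echo "$b64"   # base64(secret || CRC-32), per the spec
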
00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.045 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:11.303 00:29:11.303 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:11.303 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:11.303 11:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:11.869 { 00:29:11.869 "auth": { 00:29:11.869 "dhgroup": "ffdhe2048", 00:29:11.869 "digest": "sha384", 00:29:11.869 "state": "completed" 00:29:11.869 }, 00:29:11.869 "cntlid": 61, 00:29:11.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:11.869 "listen_address": { 00:29:11.869 "adrfam": "IPv4", 00:29:11.869 "traddr": "10.0.0.2", 00:29:11.869 "trsvcid": "4420", 00:29:11.869 "trtype": "TCP" 00:29:11.869 }, 00:29:11.869 "peer_address": { 00:29:11.869 "adrfam": "IPv4", 00:29:11.869 "traddr": "10.0.0.1", 00:29:11.869 "trsvcid": "37866", 00:29:11.869 "trtype": "TCP" 00:29:11.869 }, 00:29:11.869 "qid": 0, 00:29:11.869 "state": "enabled", 00:29:11.869 "thread": "nvmf_tgt_poll_group_000" 00:29:11.869 } 00:29:11.869 ]' 00:29:11.869 11:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:11.869 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:12.127 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:12.127 11:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:13.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:13.063 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe2048 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:13.321 11:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:13.888 00:29:13.888 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:13.888 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:13.888 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:14.146 { 00:29:14.146 "auth": { 00:29:14.146 "dhgroup": "ffdhe2048", 00:29:14.146 "digest": "sha384", 00:29:14.146 "state": "completed" 00:29:14.146 }, 00:29:14.146 "cntlid": 63, 00:29:14.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:14.146 "listen_address": { 00:29:14.146 "adrfam": "IPv4", 00:29:14.146 "traddr": "10.0.0.2", 00:29:14.146 "trsvcid": "4420", 00:29:14.146 "trtype": "TCP" 00:29:14.146 }, 00:29:14.146 "peer_address": { 00:29:14.146 "adrfam": "IPv4", 00:29:14.146 "traddr": "10.0.0.1", 00:29:14.146 "trsvcid": "37898", 00:29:14.146 "trtype": "TCP" 00:29:14.146 }, 00:29:14.146 "qid": 0, 00:29:14.146 "state": "enabled", 00:29:14.146 "thread": "nvmf_tgt_poll_group_000" 00:29:14.146 } 00:29:14.146 ]' 00:29:14.146 
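The key3 pass above is the one place where nvmf_subsystem_add_host and bdev_nvme_attach_controller run without --dhchap-ctrlr-key: the script builds that flag pair with ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), and bash's ':+' expansion yields nothing when no controller key is defined for the key id. A standalone sketch of the idiom; the array contents are illustrative, and only ckeys[3] being empty is taken from the trace.

    # ${var:+word} expands to word only if var is set and non-empty, so an
    # empty ckeys[3] leaves the ckey array empty and the flag pair disappears.
    ckeys=("c0" "c1" "c2" "")
    for keyid in 0 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid -> ${ckey[*]:-<no controller key>}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> <no controller key>
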
11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:14.146 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:14.404 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:14.404 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:14.404 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:14.662 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:14.662 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:15.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:15.230 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:15.490 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:16.055 00:29:16.055 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:16.055 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:16.055 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:16.315 { 00:29:16.315 "auth": { 00:29:16.315 "dhgroup": "ffdhe3072", 00:29:16.315 "digest": "sha384", 00:29:16.315 "state": "completed" 00:29:16.315 }, 00:29:16.315 "cntlid": 65, 00:29:16.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:16.315 "listen_address": { 00:29:16.315 "adrfam": "IPv4", 00:29:16.315 "traddr": "10.0.0.2", 00:29:16.315 "trsvcid": "4420", 00:29:16.315 "trtype": "TCP" 00:29:16.315 }, 00:29:16.315 "peer_address": { 00:29:16.315 "adrfam": "IPv4", 00:29:16.315 "traddr": "10.0.0.1", 00:29:16.315 "trsvcid": "37936", 00:29:16.315 "trtype": "TCP" 00:29:16.315 }, 00:29:16.315 "qid": 0, 
00:29:16.315 "state": "enabled", 00:29:16.315 "thread": "nvmf_tgt_poll_group_000" 00:29:16.315 } 00:29:16.315 ]' 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:16.315 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:16.881 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:16.881 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:17.448 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:17.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:17.448 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:17.448 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.448 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.448 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.448 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:17.449 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:17.449 11:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.708 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:17.992 00:29:18.252 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:18.252 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:18.252 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:18.513 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.513 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:18.513 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.513 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.513 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.513 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:18.513 { 00:29:18.513 "auth": { 00:29:18.513 "dhgroup": "ffdhe3072", 00:29:18.513 "digest": "sha384", 00:29:18.513 "state": "completed" 00:29:18.513 }, 00:29:18.513 "cntlid": 67, 00:29:18.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:18.513 "listen_address": { 00:29:18.513 "adrfam": "IPv4", 00:29:18.513 "traddr": "10.0.0.2", 00:29:18.513 "trsvcid": "4420", 00:29:18.513 "trtype": "TCP" 00:29:18.513 }, 
00:29:18.513 "peer_address": { 00:29:18.514 "adrfam": "IPv4", 00:29:18.514 "traddr": "10.0.0.1", 00:29:18.514 "trsvcid": "37078", 00:29:18.514 "trtype": "TCP" 00:29:18.514 }, 00:29:18.514 "qid": 0, 00:29:18.514 "state": "enabled", 00:29:18.514 "thread": "nvmf_tgt_poll_group_000" 00:29:18.514 } 00:29:18.514 ]' 00:29:18.514 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:18.514 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:18.514 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:18.514 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:18.514 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:18.773 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:18.773 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:18.773 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:19.032 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:19.032 11:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:19.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:19.969 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 
ffdhe3072 2 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.229 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:20.487 00:29:20.487 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:20.487 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:20.487 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:20.744 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.744 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:20.744 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.744 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.744 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.744 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:20.744 { 00:29:20.744 "auth": { 00:29:20.744 "dhgroup": "ffdhe3072", 00:29:20.744 "digest": "sha384", 00:29:20.744 "state": "completed" 00:29:20.744 }, 00:29:20.744 "cntlid": 69, 00:29:20.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:20.744 "listen_address": { 00:29:20.744 "adrfam": "IPv4", 
00:29:20.744 "traddr": "10.0.0.2", 00:29:20.744 "trsvcid": "4420", 00:29:20.744 "trtype": "TCP" 00:29:20.744 }, 00:29:20.744 "peer_address": { 00:29:20.744 "adrfam": "IPv4", 00:29:20.744 "traddr": "10.0.0.1", 00:29:20.744 "trsvcid": "37100", 00:29:20.744 "trtype": "TCP" 00:29:20.744 }, 00:29:20.744 "qid": 0, 00:29:20.744 "state": "enabled", 00:29:20.744 "thread": "nvmf_tgt_poll_group_000" 00:29:20.744 } 00:29:20.744 ]' 00:29:20.744 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:21.098 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:21.098 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:21.098 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:21.098 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:21.098 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:21.098 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:21.098 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:21.356 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:21.356 11:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:22.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:22.291 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:22.549 
11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:22.550 11:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:22.808 00:29:22.808 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:22.808 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:22.808 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:23.372 { 00:29:23.372 "auth": { 00:29:23.372 "dhgroup": "ffdhe3072", 00:29:23.372 "digest": "sha384", 00:29:23.372 "state": "completed" 00:29:23.372 }, 00:29:23.372 "cntlid": 71, 00:29:23.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:23.372 "listen_address": { 00:29:23.372 "adrfam": 
"IPv4", 00:29:23.372 "traddr": "10.0.0.2", 00:29:23.372 "trsvcid": "4420", 00:29:23.372 "trtype": "TCP" 00:29:23.372 }, 00:29:23.372 "peer_address": { 00:29:23.372 "adrfam": "IPv4", 00:29:23.372 "traddr": "10.0.0.1", 00:29:23.372 "trsvcid": "37124", 00:29:23.372 "trtype": "TCP" 00:29:23.372 }, 00:29:23.372 "qid": 0, 00:29:23.372 "state": "enabled", 00:29:23.372 "thread": "nvmf_tgt_poll_group_000" 00:29:23.372 } 00:29:23.372 ]' 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:23.372 11:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:23.630 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:23.630 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:24.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:24.623 11:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 
00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:24.882 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:25.141 00:29:25.141 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:25.141 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:25.141 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:25.707 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.707 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:25.707 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.707 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.707 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.707 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:25.707 { 00:29:25.707 "auth": { 00:29:25.707 "dhgroup": "ffdhe4096", 00:29:25.707 "digest": "sha384", 00:29:25.707 "state": "completed" 00:29:25.707 }, 00:29:25.707 "cntlid": 73, 00:29:25.707 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:25.707 "listen_address": { 00:29:25.707 "adrfam": "IPv4", 00:29:25.707 "traddr": "10.0.0.2", 00:29:25.707 "trsvcid": "4420", 00:29:25.707 "trtype": "TCP" 00:29:25.707 }, 00:29:25.707 "peer_address": { 00:29:25.707 "adrfam": "IPv4", 00:29:25.707 "traddr": "10.0.0.1", 00:29:25.707 "trsvcid": "37160", 00:29:25.707 "trtype": "TCP" 00:29:25.707 }, 00:29:25.707 "qid": 0, 00:29:25.707 "state": "enabled", 00:29:25.707 "thread": "nvmf_tgt_poll_group_000" 00:29:25.707 } 00:29:25.707 ]' 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:25.708 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:25.966 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:25.966 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:26.901 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:26.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:26.901 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:26.901 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.901 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.901 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.901 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:26.901 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:26.901 11:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.160 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:27.418 00:29:27.418 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:27.418 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:27.418 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:27.986 { 
00:29:27.986 "auth": { 00:29:27.986 "dhgroup": "ffdhe4096", 00:29:27.986 "digest": "sha384", 00:29:27.986 "state": "completed" 00:29:27.986 }, 00:29:27.986 "cntlid": 75, 00:29:27.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:27.986 "listen_address": { 00:29:27.986 "adrfam": "IPv4", 00:29:27.986 "traddr": "10.0.0.2", 00:29:27.986 "trsvcid": "4420", 00:29:27.986 "trtype": "TCP" 00:29:27.986 }, 00:29:27.986 "peer_address": { 00:29:27.986 "adrfam": "IPv4", 00:29:27.986 "traddr": "10.0.0.1", 00:29:27.986 "trsvcid": "57400", 00:29:27.986 "trtype": "TCP" 00:29:27.986 }, 00:29:27.986 "qid": 0, 00:29:27.986 "state": "enabled", 00:29:27.986 "thread": "nvmf_tgt_poll_group_000" 00:29:27.986 } 00:29:27.986 ]' 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:27.986 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:28.244 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:28.244 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:29.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.301 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.559 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.560 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.560 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:29.832 00:29:29.832 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:29.832 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:29.832 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.400 
11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:30.400 { 00:29:30.400 "auth": { 00:29:30.400 "dhgroup": "ffdhe4096", 00:29:30.400 "digest": "sha384", 00:29:30.400 "state": "completed" 00:29:30.400 }, 00:29:30.400 "cntlid": 77, 00:29:30.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:30.400 "listen_address": { 00:29:30.400 "adrfam": "IPv4", 00:29:30.400 "traddr": "10.0.0.2", 00:29:30.400 "trsvcid": "4420", 00:29:30.400 "trtype": "TCP" 00:29:30.400 }, 00:29:30.400 "peer_address": { 00:29:30.400 "adrfam": "IPv4", 00:29:30.400 "traddr": "10.0.0.1", 00:29:30.400 "trsvcid": "57424", 00:29:30.400 "trtype": "TCP" 00:29:30.400 }, 00:29:30.400 "qid": 0, 00:29:30.400 "state": "enabled", 00:29:30.400 "thread": "nvmf_tgt_poll_group_000" 00:29:30.400 } 00:29:30.400 ]' 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:30.400 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:30.657 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:30.657 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:31.591 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:31.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:31.591 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:31.591 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.591 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.591 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.591 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:29:31.591 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:31.591 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:31.849 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:32.106 00:29:32.106 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:32.106 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:32.106 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:32.363 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.363 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:32.363 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.363 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.363 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:32.639 { 00:29:32.639 "auth": { 00:29:32.639 "dhgroup": "ffdhe4096", 00:29:32.639 "digest": "sha384", 00:29:32.639 "state": "completed" 00:29:32.639 }, 00:29:32.639 "cntlid": 79, 00:29:32.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:32.639 "listen_address": { 00:29:32.639 "adrfam": "IPv4", 00:29:32.639 "traddr": "10.0.0.2", 00:29:32.639 "trsvcid": "4420", 00:29:32.639 "trtype": "TCP" 00:29:32.639 }, 00:29:32.639 "peer_address": { 00:29:32.639 "adrfam": "IPv4", 00:29:32.639 "traddr": "10.0.0.1", 00:29:32.639 "trsvcid": "57458", 00:29:32.639 "trtype": "TCP" 00:29:32.639 }, 00:29:32.639 "qid": 0, 00:29:32.639 "state": "enabled", 00:29:32.639 "thread": "nvmf_tgt_poll_group_000" 00:29:32.639 } 00:29:32.639 ]' 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:32.639 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:32.896 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:32.896 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:33.461 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:33.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:33.461 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:33.461 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.461 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # 
for keyid in "${!keys[@]}" 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:33.719 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.286 00:29:34.286 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:34.286 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:34.286 11:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:34.544 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.544 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:34.544 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.544 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
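
The ffdhe6144 rounds beginning here follow the same recipe: before each attach the host daemon is reconfigured so that only the digest/DH-group pair under test can be negotiated, and the target is told which key(s) to expect for the host NQN. Condensed to the two RPCs from the trace (rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the target-side call goes to the default RPC socket):

  # Host side: pin DH-HMAC-CHAP negotiation to sha384 + ffdhe6144.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host NQN and bind key0/ckey0 to it.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
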
00:29:34.544 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.544 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:34.544 { 00:29:34.544 "auth": { 00:29:34.544 "dhgroup": "ffdhe6144", 00:29:34.545 "digest": "sha384", 00:29:34.545 "state": "completed" 00:29:34.545 }, 00:29:34.545 "cntlid": 81, 00:29:34.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:34.545 "listen_address": { 00:29:34.545 "adrfam": "IPv4", 00:29:34.545 "traddr": "10.0.0.2", 00:29:34.545 "trsvcid": "4420", 00:29:34.545 "trtype": "TCP" 00:29:34.545 }, 00:29:34.545 "peer_address": { 00:29:34.545 "adrfam": "IPv4", 00:29:34.545 "traddr": "10.0.0.1", 00:29:34.545 "trsvcid": "57486", 00:29:34.545 "trtype": "TCP" 00:29:34.545 }, 00:29:34.545 "qid": 0, 00:29:34.545 "state": "enabled", 00:29:34.545 "thread": "nvmf_tgt_poll_group_000" 00:29:34.545 } 00:29:34.545 ]' 00:29:34.545 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:34.545 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:34.545 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:34.803 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:34.803 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:34.803 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:34.803 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:34.803 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:35.062 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:35.062 11:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:35.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
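
The qpair dump above is how each round is verified: nvmf_subsystem_get_qpairs on the target reports, per queue pair, an auth object carrying the digest and DH group that were actually negotiated plus the authentication state. The [[ sha384 == \s\h\a\3\8\4 ]] style comparisons in the trace are just bash xtrace escaping the right-hand side of the test. The same assertions, sketched outside the harness and assuming the target answers on the default RPC socket:

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # DH-HMAC-CHAP must have completed with exactly the forced parameters.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
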
00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:35.997 11:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.566 00:29:36.566 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:36.566 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:36.566 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:36.824 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.824 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:36.824 11:13:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.824 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.824 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.824 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:36.824 { 00:29:36.824 "auth": { 00:29:36.824 "dhgroup": "ffdhe6144", 00:29:36.824 "digest": "sha384", 00:29:36.824 "state": "completed" 00:29:36.824 }, 00:29:36.824 "cntlid": 83, 00:29:36.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:36.824 "listen_address": { 00:29:36.824 "adrfam": "IPv4", 00:29:36.824 "traddr": "10.0.0.2", 00:29:36.824 "trsvcid": "4420", 00:29:36.824 "trtype": "TCP" 00:29:36.824 }, 00:29:36.824 "peer_address": { 00:29:36.824 "adrfam": "IPv4", 00:29:36.824 "traddr": "10.0.0.1", 00:29:36.824 "trsvcid": "57500", 00:29:36.824 "trtype": "TCP" 00:29:36.824 }, 00:29:36.824 "qid": 0, 00:29:36.824 "state": "enabled", 00:29:36.824 "thread": "nvmf_tgt_poll_group_000" 00:29:36.824 } 00:29:36.824 ]' 00:29:36.824 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:37.083 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:37.083 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:37.083 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:37.083 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:37.083 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:37.083 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:37.083 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:37.342 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:37.342 11:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:37.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
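
Each round also exercises the kernel initiator: nvme-cli connects with the secrets passed inline in their DHHC-1 representation, "DHHC-1:XX:<base64>:", where (per the DH-HMAC-CHAP key format) the middle field records how the secret was derived: 00 as-is, 01/02/03 for SHA-256/384/512. Trimmed from the trace, with the base64 payloads elided:

  # Bidirectional auth: host secret plus controller (ctrl) secret.
  nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
      -n nqn.2024-03.io.spdk:cnode0 \
      -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 \
      --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 \
      --dhchap-secret 'DHHC-1:01:...:' \
      --dhchap-ctrl-secret 'DHHC-1:02:...:'

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
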
00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:37.910 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.478 11:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.737 00:29:38.737 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:38.737 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:38.737 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:39.305 { 00:29:39.305 "auth": { 00:29:39.305 "dhgroup": "ffdhe6144", 00:29:39.305 "digest": "sha384", 00:29:39.305 "state": "completed" 00:29:39.305 }, 00:29:39.305 "cntlid": 85, 00:29:39.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:39.305 "listen_address": { 00:29:39.305 "adrfam": "IPv4", 00:29:39.305 "traddr": "10.0.0.2", 00:29:39.305 "trsvcid": "4420", 00:29:39.305 "trtype": "TCP" 00:29:39.305 }, 00:29:39.305 "peer_address": { 00:29:39.305 "adrfam": "IPv4", 00:29:39.305 "traddr": "10.0.0.1", 00:29:39.305 "trsvcid": "36484", 00:29:39.305 "trtype": "TCP" 00:29:39.305 }, 00:29:39.305 "qid": 0, 00:29:39.305 "state": "enabled", 00:29:39.305 "thread": "nvmf_tgt_poll_group_000" 00:29:39.305 } 00:29:39.305 ]' 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:39.305 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:39.565 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:39.565 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:40.501 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:40.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:40.501 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:40.501 
11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.501 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.501 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.501 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:40.501 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:40.501 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:40.759 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:41.323 00:29:41.323 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:41.323 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:41.323 11:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:41.579 { 00:29:41.579 "auth": { 00:29:41.579 "dhgroup": "ffdhe6144", 00:29:41.579 "digest": "sha384", 00:29:41.579 "state": "completed" 00:29:41.579 }, 00:29:41.579 "cntlid": 87, 00:29:41.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:41.579 "listen_address": { 00:29:41.579 "adrfam": "IPv4", 00:29:41.579 "traddr": "10.0.0.2", 00:29:41.579 "trsvcid": "4420", 00:29:41.579 "trtype": "TCP" 00:29:41.579 }, 00:29:41.579 "peer_address": { 00:29:41.579 "adrfam": "IPv4", 00:29:41.579 "traddr": "10.0.0.1", 00:29:41.579 "trsvcid": "36508", 00:29:41.579 "trtype": "TCP" 00:29:41.579 }, 00:29:41.579 "qid": 0, 00:29:41.579 "state": "enabled", 00:29:41.579 "thread": "nvmf_tgt_poll_group_000" 00:29:41.579 } 00:29:41.579 ]' 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:41.579 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:41.580 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:41.836 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:41.836 11:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:42.791 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:42.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:42.792 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:42.792 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.792 
11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.792 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.792 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:42.792 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:42.792 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:42.792 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.050 11:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.615 00:29:43.615 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:43.615 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:43.616 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:44.181 { 00:29:44.181 "auth": { 00:29:44.181 "dhgroup": "ffdhe8192", 00:29:44.181 "digest": "sha384", 00:29:44.181 "state": "completed" 00:29:44.181 }, 00:29:44.181 "cntlid": 89, 00:29:44.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:44.181 "listen_address": { 00:29:44.181 "adrfam": "IPv4", 00:29:44.181 "traddr": "10.0.0.2", 00:29:44.181 "trsvcid": "4420", 00:29:44.181 "trtype": "TCP" 00:29:44.181 }, 00:29:44.181 "peer_address": { 00:29:44.181 "adrfam": "IPv4", 00:29:44.181 "traddr": "10.0.0.1", 00:29:44.181 "trsvcid": "36538", 00:29:44.181 "trtype": "TCP" 00:29:44.181 }, 00:29:44.181 "qid": 0, 00:29:44.181 "state": "enabled", 00:29:44.181 "thread": "nvmf_tgt_poll_group_000" 00:29:44.181 } 00:29:44.181 ]' 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:44.181 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:44.466 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:44.466 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:45.399 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:45.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:45.399 11:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:45.399 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.399 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.399 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.399 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:45.399 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:45.399 11:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.399 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.656 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.656 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.656 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.656 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.225 00:29:46.225 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:46.225 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:29:46.225 11:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:46.484 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.484 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:46.484 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.484 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.484 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.484 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:46.484 { 00:29:46.484 "auth": { 00:29:46.484 "dhgroup": "ffdhe8192", 00:29:46.484 "digest": "sha384", 00:29:46.484 "state": "completed" 00:29:46.484 }, 00:29:46.484 "cntlid": 91, 00:29:46.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:46.484 "listen_address": { 00:29:46.484 "adrfam": "IPv4", 00:29:46.484 "traddr": "10.0.0.2", 00:29:46.484 "trsvcid": "4420", 00:29:46.484 "trtype": "TCP" 00:29:46.484 }, 00:29:46.484 "peer_address": { 00:29:46.484 "adrfam": "IPv4", 00:29:46.484 "traddr": "10.0.0.1", 00:29:46.484 "trsvcid": "36564", 00:29:46.484 "trtype": "TCP" 00:29:46.484 }, 00:29:46.484 "qid": 0, 00:29:46.484 "state": "enabled", 00:29:46.484 "thread": "nvmf_tgt_poll_group_000" 00:29:46.484 } 00:29:46.484 ]' 00:29:46.484 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:46.742 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:46.743 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:46.743 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:46.743 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:46.743 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:46.743 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:46.743 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:47.001 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:47.001 11:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:29:47.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:47.938 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.198 11:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.765 00:29:48.765 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:48.765 11:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:48.765 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:49.023 { 00:29:49.023 "auth": { 00:29:49.023 "dhgroup": "ffdhe8192", 00:29:49.023 "digest": "sha384", 00:29:49.023 "state": "completed" 00:29:49.023 }, 00:29:49.023 "cntlid": 93, 00:29:49.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:49.023 "listen_address": { 00:29:49.023 "adrfam": "IPv4", 00:29:49.023 "traddr": "10.0.0.2", 00:29:49.023 "trsvcid": "4420", 00:29:49.023 "trtype": "TCP" 00:29:49.023 }, 00:29:49.023 "peer_address": { 00:29:49.023 "adrfam": "IPv4", 00:29:49.023 "traddr": "10.0.0.1", 00:29:49.023 "trsvcid": "37122", 00:29:49.023 "trtype": "TCP" 00:29:49.023 }, 00:29:49.023 "qid": 0, 00:29:49.023 "state": "enabled", 00:29:49.023 "thread": "nvmf_tgt_poll_group_000" 00:29:49.023 } 00:29:49.023 ]' 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:49.023 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:49.281 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:49.281 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:49.281 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:49.281 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:49.281 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:49.539 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:49.539 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret 
DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:50.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:50.105 11:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:50.670 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:51.235 00:29:51.235 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:29:51.235 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:51.235 11:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:51.493 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.493 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:51.493 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.493 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.493 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.493 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:51.493 { 00:29:51.493 "auth": { 00:29:51.493 "dhgroup": "ffdhe8192", 00:29:51.493 "digest": "sha384", 00:29:51.493 "state": "completed" 00:29:51.493 }, 00:29:51.493 "cntlid": 95, 00:29:51.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:51.493 "listen_address": { 00:29:51.493 "adrfam": "IPv4", 00:29:51.493 "traddr": "10.0.0.2", 00:29:51.493 "trsvcid": "4420", 00:29:51.493 "trtype": "TCP" 00:29:51.493 }, 00:29:51.493 "peer_address": { 00:29:51.493 "adrfam": "IPv4", 00:29:51.493 "traddr": "10.0.0.1", 00:29:51.493 "trsvcid": "37146", 00:29:51.493 "trtype": "TCP" 00:29:51.493 }, 00:29:51.493 "qid": 0, 00:29:51.493 "state": "enabled", 00:29:51.493 "thread": "nvmf_tgt_poll_group_000" 00:29:51.493 } 00:29:51.493 ]' 00:29:51.493 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:51.751 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:29:51.751 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:51.751 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:51.751 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:51.751 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:51.751 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:51.751 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:52.010 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:52.010 11:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:52.576 11:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:52.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:52.576 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:52.576 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.576 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.576 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.576 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:29:52.577 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:29:52.577 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:52.577 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:52.577 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:52.836 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:53.095 00:29:53.095 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:53.095 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:53.095 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:53.661 { 00:29:53.661 "auth": { 00:29:53.661 "dhgroup": "null", 00:29:53.661 "digest": "sha512", 00:29:53.661 "state": "completed" 00:29:53.661 }, 00:29:53.661 "cntlid": 97, 00:29:53.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:53.661 "listen_address": { 00:29:53.661 "adrfam": "IPv4", 00:29:53.661 "traddr": "10.0.0.2", 00:29:53.661 "trsvcid": "4420", 00:29:53.661 "trtype": "TCP" 00:29:53.661 }, 00:29:53.661 "peer_address": { 00:29:53.661 "adrfam": "IPv4", 00:29:53.661 "traddr": "10.0.0.1", 00:29:53.661 "trsvcid": "37174", 00:29:53.661 "trtype": "TCP" 00:29:53.661 }, 00:29:53.661 "qid": 0, 00:29:53.661 "state": "enabled", 00:29:53.661 "thread": "nvmf_tgt_poll_group_000" 00:29:53.661 } 00:29:53.661 ]' 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:53.661 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:53.921 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:53.921 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:29:54.489 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:54.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:54.490 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:54.490 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.490 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.490 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.490 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:54.490 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:54.490 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:54.748 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:55.317 00:29:55.317 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:55.317 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:55.317 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:55.577 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.577 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:55.577 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.577 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:55.577 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.577 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:55.577 { 00:29:55.577 "auth": { 00:29:55.577 "dhgroup": "null", 00:29:55.577 "digest": "sha512", 00:29:55.577 "state": "completed" 00:29:55.577 }, 00:29:55.577 "cntlid": 99, 00:29:55.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:55.577 "listen_address": { 00:29:55.577 "adrfam": "IPv4", 00:29:55.577 "traddr": "10.0.0.2", 00:29:55.577 "trsvcid": "4420", 00:29:55.577 "trtype": "TCP" 00:29:55.577 }, 00:29:55.577 "peer_address": { 00:29:55.577 "adrfam": "IPv4", 00:29:55.577 "traddr": "10.0.0.1", 00:29:55.577 "trsvcid": "37202", 00:29:55.577 "trtype": "TCP" 00:29:55.577 }, 00:29:55.577 "qid": 0, 00:29:55.577 "state": "enabled", 00:29:55.577 "thread": "nvmf_tgt_poll_group_000" 00:29:55.577 } 00:29:55.577 ]' 00:29:55.577 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:55.577 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:55.577 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:55.577 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:55.577 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:55.577 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:55.577 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:55.577 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:55.836 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 
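For reference, the kernel-initiator step that auth.sh's nvme_connect/nvme disconnect pair performs in each iteration above reduces to roughly the following sketch. Assumptions: nvme-cli built with DH-HMAC-CHAP support, the SPDK target from this run listening on 10.0.0.2:4420, and placeholder variables standing in for the test's generated DHHC-1 secrets.

# Sketch of one nvme_connect / nvme disconnect round from auth.sh.
# HOST_KEY and CTRL_KEY are placeholders, not the test's real keys.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
# -i 1: a single I/O queue; -l 0: fail immediately on controller loss
# instead of retrying. --dhchap-secret authenticates the host to the
# controller; --dhchap-ctrl-secret additionally makes the controller
# prove itself to the host (bidirectional DH-HMAC-CHAP).
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
  --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 \
  --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
nvme disconnect -n "$SUBNQN"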
00:29:55.836 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:56.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:56.403 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
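The SPDK-side half of the same iteration (the hostrpc and rpc_cmd entries above) follows the connect_authenticate pattern sketched below; a minimal sketch, assuming both SPDK apps from this run are up, that the target listens on its default RPC socket, and that the named keys key2/ckey2 were loaded earlier in auth.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
# Host app: restrict negotiation to the digest/dhgroup pair under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups null
# Target app (default RPC socket): admit the host NQN, binding its key
# and, for bidirectional auth, the controller key.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Host app: attach; the DH-HMAC-CHAP handshake runs during this call.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2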
00:29:56.970 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:57.229 00:29:57.229 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:57.229 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:57.229 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:57.488 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.488 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:57.488 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.488 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:57.488 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.488 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:57.488 { 00:29:57.488 "auth": { 00:29:57.488 "dhgroup": "null", 00:29:57.488 "digest": "sha512", 00:29:57.488 "state": "completed" 00:29:57.488 }, 00:29:57.488 "cntlid": 101, 00:29:57.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:57.488 "listen_address": { 00:29:57.488 "adrfam": "IPv4", 00:29:57.488 "traddr": "10.0.0.2", 00:29:57.488 "trsvcid": "4420", 00:29:57.488 "trtype": "TCP" 00:29:57.488 }, 00:29:57.488 "peer_address": { 00:29:57.488 "adrfam": "IPv4", 00:29:57.488 "traddr": "10.0.0.1", 00:29:57.488 "trsvcid": "37230", 00:29:57.488 "trtype": "TCP" 00:29:57.488 }, 00:29:57.488 "qid": 0, 00:29:57.488 "state": "enabled", 00:29:57.488 "thread": "nvmf_tgt_poll_group_000" 00:29:57.488 } 00:29:57.488 ]' 00:29:57.488 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:57.488 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:57.488 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:57.488 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:57.488 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:57.488 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:57.488 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:57.488 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:58.055 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: 
--dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:58.056 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:58.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:58.623 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key3 00:29:58.882 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:29:59.204 00:29:59.204 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:29:59.204 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:59.204 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:29:59.478 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.478 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:59.478 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.478 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:59.478 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.478 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:29:59.478 { 00:29:59.478 "auth": { 00:29:59.478 "dhgroup": "null", 00:29:59.478 "digest": "sha512", 00:29:59.478 "state": "completed" 00:29:59.478 }, 00:29:59.478 "cntlid": 103, 00:29:59.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:29:59.478 "listen_address": { 00:29:59.478 "adrfam": "IPv4", 00:29:59.478 "traddr": "10.0.0.2", 00:29:59.478 "trsvcid": "4420", 00:29:59.478 "trtype": "TCP" 00:29:59.478 }, 00:29:59.478 "peer_address": { 00:29:59.478 "adrfam": "IPv4", 00:29:59.478 "traddr": "10.0.0.1", 00:29:59.478 "trsvcid": "51108", 00:29:59.478 "trtype": "TCP" 00:29:59.478 }, 00:29:59.478 "qid": 0, 00:29:59.478 "state": "enabled", 00:29:59.478 "thread": "nvmf_tgt_poll_group_000" 00:29:59.478 } 00:29:59.478 ]' 00:29:59.478 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:29:59.737 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:59.737 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:29:59.737 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:29:59.737 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:29:59.737 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:59.737 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:59.737 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:59.996 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:29:59.996 
11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:00.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:00.938 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:01.195 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:01.452 00:30:01.452 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:01.452 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:01.452 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:01.710 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.710 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:01.710 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.710 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:01.710 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.710 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:01.710 { 00:30:01.710 "auth": { 00:30:01.710 "dhgroup": "ffdhe2048", 00:30:01.710 "digest": "sha512", 00:30:01.710 "state": "completed" 00:30:01.710 }, 00:30:01.710 "cntlid": 105, 00:30:01.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:01.710 "listen_address": { 00:30:01.710 "adrfam": "IPv4", 00:30:01.710 "traddr": "10.0.0.2", 00:30:01.710 "trsvcid": "4420", 00:30:01.710 "trtype": "TCP" 00:30:01.710 }, 00:30:01.710 "peer_address": { 00:30:01.710 "adrfam": "IPv4", 00:30:01.710 "traddr": "10.0.0.1", 00:30:01.710 "trsvcid": "51138", 00:30:01.710 "trtype": "TCP" 00:30:01.710 }, 00:30:01.710 "qid": 0, 00:30:01.710 "state": "enabled", 00:30:01.710 "thread": "nvmf_tgt_poll_group_000" 00:30:01.710 } 00:30:01.710 ]' 00:30:01.710 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:01.967 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:01.968 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:01.968 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:01.968 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:01.968 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:01.968 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:01.968 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:02.225 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:02.225 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:03.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:03.160 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.419 11:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.419 11:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.677 00:30:03.677 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:03.677 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:03.677 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:03.936 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.936 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:03.936 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.936 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:03.936 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.936 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:03.936 { 00:30:03.936 "auth": { 00:30:03.936 "dhgroup": "ffdhe2048", 00:30:03.936 "digest": "sha512", 00:30:03.936 "state": "completed" 00:30:03.936 }, 00:30:03.936 "cntlid": 107, 00:30:03.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:03.936 "listen_address": { 00:30:03.936 "adrfam": "IPv4", 00:30:03.936 "traddr": "10.0.0.2", 00:30:03.936 "trsvcid": "4420", 00:30:03.936 "trtype": "TCP" 00:30:03.936 }, 00:30:03.936 "peer_address": { 00:30:03.936 "adrfam": "IPv4", 00:30:03.936 "traddr": "10.0.0.1", 00:30:03.936 "trsvcid": "51168", 00:30:03.936 "trtype": "TCP" 00:30:03.936 }, 00:30:03.936 "qid": 0, 00:30:03.936 "state": "enabled", 00:30:03.936 "thread": "nvmf_tgt_poll_group_000" 00:30:03.936 } 00:30:03.936 ]' 00:30:03.936 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:04.194 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:04.194 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:04.194 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:04.194 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:04.194 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:04.194 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:04.194 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:04.453 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:04.453 11:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:05.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:05.074 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.640 11:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.898 00:30:05.898 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:05.898 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:05.898 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:06.156 { 00:30:06.156 "auth": { 00:30:06.156 "dhgroup": "ffdhe2048", 00:30:06.156 "digest": "sha512", 00:30:06.156 "state": "completed" 00:30:06.156 }, 00:30:06.156 "cntlid": 109, 00:30:06.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:06.156 "listen_address": { 00:30:06.156 "adrfam": "IPv4", 00:30:06.156 "traddr": "10.0.0.2", 00:30:06.156 "trsvcid": "4420", 00:30:06.156 "trtype": "TCP" 00:30:06.156 }, 00:30:06.156 "peer_address": { 00:30:06.156 "adrfam": "IPv4", 00:30:06.156 "traddr": "10.0.0.1", 00:30:06.156 "trsvcid": "51210", 00:30:06.156 "trtype": "TCP" 00:30:06.156 }, 00:30:06.156 "qid": 0, 00:30:06.156 "state": "enabled", 00:30:06.156 "thread": "nvmf_tgt_poll_group_000" 00:30:06.156 } 00:30:06.156 ]' 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:06.156 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:06.415 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:06.415 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:30:06.415 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:06.674 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:06.674 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:07.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:07.241 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:07.808 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:08.066 00:30:08.066 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:08.066 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:08.066 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:08.324 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.324 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:08.324 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.324 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:08.325 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.325 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:08.325 { 00:30:08.325 "auth": { 00:30:08.325 "dhgroup": "ffdhe2048", 00:30:08.325 "digest": "sha512", 00:30:08.325 "state": "completed" 00:30:08.325 }, 00:30:08.325 "cntlid": 111, 00:30:08.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:08.325 "listen_address": { 00:30:08.325 "adrfam": "IPv4", 00:30:08.325 "traddr": "10.0.0.2", 00:30:08.325 "trsvcid": "4420", 00:30:08.325 "trtype": "TCP" 00:30:08.325 }, 00:30:08.325 "peer_address": { 00:30:08.325 "adrfam": "IPv4", 00:30:08.325 "traddr": "10.0.0.1", 00:30:08.325 "trsvcid": "49022", 00:30:08.325 "trtype": "TCP" 00:30:08.325 }, 00:30:08.325 "qid": 0, 00:30:08.325 "state": "enabled", 00:30:08.325 "thread": "nvmf_tgt_poll_group_000" 00:30:08.325 } 00:30:08.325 ]' 00:30:08.325 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:08.325 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:08.325 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:08.583 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:30:08.583 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:08.583 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:08.583 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:08.583 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:08.842 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:08.842 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:09.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:09.411 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:09.669 11:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.669 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.989 00:30:09.989 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:09.989 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:09.989 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:10.249 { 00:30:10.249 "auth": { 00:30:10.249 "dhgroup": "ffdhe3072", 00:30:10.249 "digest": "sha512", 00:30:10.249 "state": "completed" 00:30:10.249 }, 00:30:10.249 "cntlid": 113, 00:30:10.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:10.249 "listen_address": { 00:30:10.249 "adrfam": "IPv4", 00:30:10.249 "traddr": "10.0.0.2", 00:30:10.249 "trsvcid": "4420", 00:30:10.249 "trtype": "TCP" 00:30:10.249 }, 00:30:10.249 "peer_address": { 00:30:10.249 "adrfam": "IPv4", 00:30:10.249 "traddr": "10.0.0.1", 00:30:10.249 "trsvcid": "49040", 00:30:10.249 "trtype": "TCP" 00:30:10.249 }, 00:30:10.249 "qid": 0, 00:30:10.249 "state": "enabled", 00:30:10.249 "thread": "nvmf_tgt_poll_group_000" 00:30:10.249 } 00:30:10.249 ]' 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:10.249 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:10.508 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:10.508 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:10.508 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:10.766 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:10.766 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:11.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:11.331 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:11.589 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:30:11.589 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:11.589 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:11.847 11:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:11.847 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:12.106 00:30:12.106 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:12.106 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:12.106 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:12.365 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.365 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:12.365 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.365 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:12.365 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.365 11:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:12.365 { 00:30:12.365 "auth": { 00:30:12.365 "dhgroup": "ffdhe3072", 00:30:12.365 "digest": "sha512", 00:30:12.365 "state": "completed" 00:30:12.365 }, 00:30:12.365 "cntlid": 115, 00:30:12.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:12.365 "listen_address": { 00:30:12.365 "adrfam": "IPv4", 00:30:12.365 "traddr": "10.0.0.2", 00:30:12.365 "trsvcid": "4420", 00:30:12.365 "trtype": "TCP" 00:30:12.365 }, 00:30:12.365 "peer_address": { 00:30:12.365 "adrfam": "IPv4", 00:30:12.365 "traddr": "10.0.0.1", 00:30:12.365 "trsvcid": "49062", 00:30:12.365 "trtype": "TCP" 00:30:12.365 }, 00:30:12.365 "qid": 0, 00:30:12.365 "state": "enabled", 00:30:12.365 "thread": "nvmf_tgt_poll_group_000" 00:30:12.365 } 00:30:12.365 ]' 00:30:12.365 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:12.623 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:12.623 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:12.623 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 
== \f\f\d\h\e\3\0\7\2 ]] 00:30:12.623 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:12.623 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:12.623 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:12.623 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:12.881 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:12.881 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:13.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:13.447 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.013 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.271 00:30:14.271 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:14.271 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:14.271 11:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:14.528 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.528 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:14.528 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.528 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:14.785 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.785 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:14.785 { 00:30:14.785 "auth": { 00:30:14.785 "dhgroup": "ffdhe3072", 00:30:14.785 "digest": "sha512", 00:30:14.785 "state": "completed" 00:30:14.785 }, 00:30:14.785 "cntlid": 117, 00:30:14.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:14.785 "listen_address": { 00:30:14.785 "adrfam": "IPv4", 00:30:14.786 "traddr": "10.0.0.2", 00:30:14.786 "trsvcid": "4420", 00:30:14.786 "trtype": "TCP" 00:30:14.786 }, 00:30:14.786 "peer_address": { 00:30:14.786 "adrfam": "IPv4", 00:30:14.786 "traddr": "10.0.0.1", 00:30:14.786 "trsvcid": "49100", 00:30:14.786 "trtype": "TCP" 00:30:14.786 }, 00:30:14.786 "qid": 0, 00:30:14.786 "state": "enabled", 00:30:14.786 "thread": "nvmf_tgt_poll_group_000" 00:30:14.786 } 00:30:14.786 ]' 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:14.786 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:15.043 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:15.043 11:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:15.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:15.979 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:16.238 11:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:16.238 11:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:16.496 00:30:16.496 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:16.496 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:16.496 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:16.754 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.754 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:16.754 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.754 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:16.754 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.754 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:16.754 { 00:30:16.754 "auth": { 00:30:16.754 "dhgroup": "ffdhe3072", 00:30:16.754 "digest": "sha512", 00:30:16.754 "state": "completed" 00:30:16.754 }, 00:30:16.754 "cntlid": 119, 00:30:16.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:16.754 "listen_address": { 00:30:16.754 "adrfam": "IPv4", 00:30:16.754 "traddr": "10.0.0.2", 00:30:16.754 "trsvcid": "4420", 00:30:16.754 "trtype": "TCP" 00:30:16.754 }, 00:30:16.755 "peer_address": { 00:30:16.755 "adrfam": "IPv4", 00:30:16.755 "traddr": "10.0.0.1", 00:30:16.755 "trsvcid": "49134", 00:30:16.755 "trtype": "TCP" 00:30:16.755 }, 00:30:16.755 "qid": 0, 00:30:16.755 "state": "enabled", 00:30:16.755 "thread": "nvmf_tgt_poll_group_000" 00:30:16.755 } 00:30:16.755 ]' 00:30:16.755 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:16.755 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:17.013 11:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:17.013 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:30:17.013 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:17.013 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:17.013 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:17.013 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:17.271 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:17.272 11:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:17.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:17.838 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:18.404 11:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:18.664 00:30:18.664 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:18.664 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:18.664 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:18.923 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.923 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:18.923 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.923 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.923 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.923 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:18.923 { 00:30:18.923 "auth": { 00:30:18.923 "dhgroup": "ffdhe4096", 00:30:18.923 "digest": "sha512", 00:30:18.923 "state": "completed" 00:30:18.923 }, 00:30:18.923 "cntlid": 121, 00:30:18.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:18.923 "listen_address": { 00:30:18.923 "adrfam": "IPv4", 00:30:18.923 "traddr": "10.0.0.2", 00:30:18.923 "trsvcid": "4420", 00:30:18.923 "trtype": "TCP" 00:30:18.923 }, 00:30:18.923 "peer_address": { 00:30:18.923 "adrfam": "IPv4", 00:30:18.923 "traddr": "10.0.0.1", 00:30:18.923 "trsvcid": "58062", 00:30:18.923 "trtype": "TCP" 00:30:18.923 }, 00:30:18.923 "qid": 0, 00:30:18.923 "state": "enabled", 00:30:18.923 "thread": "nvmf_tgt_poll_group_000" 00:30:18.923 } 00:30:18.923 ]' 00:30:18.923 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
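[Editor's note: the log above repeats one DH-CHAP verification cycle per DH group (ffdhe2048, ffdhe3072, ffdhe4096) and key index (key0-key3). A condensed sketch of a single iteration, assembled only from commands recorded in this log — subsystem NQN, host NQN/UUID, addresses, and the /var/tmp/host.sock socket are copied verbatim; key material is elided; host-side calls go through the host RPC socket while rpc_cmd targets the default SPDK socket, as in autotest_common.sh:

# host side: restrict the initiator to the digest/dhgroup under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
# target side: register the host with the key pair being exercised
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach, then confirm the controller came up and the qpair authenticated
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'          # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'                                  # expect "completed"
# tear down, then repeat the handshake through the kernel initiator
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6

The jq checks on .auth.digest and .auth.dhgroup (also visible above) verify that the negotiated parameters match what bdev_nvme_set_options requested, not merely that a connection succeeded.]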
00:30:19.216 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:19.216 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:19.216 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:19.216 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:19.216 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:19.216 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:19.216 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:19.474 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:19.474 11:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:20.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:20.042 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe4096 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:20.611 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:20.874 00:30:20.874 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:20.874 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:20.874 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:21.131 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.131 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:21.131 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.131 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:21.389 { 00:30:21.389 "auth": { 00:30:21.389 "dhgroup": "ffdhe4096", 00:30:21.389 "digest": "sha512", 00:30:21.389 "state": "completed" 00:30:21.389 }, 00:30:21.389 "cntlid": 123, 00:30:21.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:21.389 "listen_address": { 00:30:21.389 "adrfam": "IPv4", 00:30:21.389 "traddr": "10.0.0.2", 00:30:21.389 "trsvcid": "4420", 00:30:21.389 "trtype": "TCP" 00:30:21.389 }, 00:30:21.389 "peer_address": { 00:30:21.389 "adrfam": "IPv4", 00:30:21.389 "traddr": "10.0.0.1", 00:30:21.389 "trsvcid": "58100", 00:30:21.389 "trtype": "TCP" 00:30:21.389 }, 00:30:21.389 "qid": 0, 00:30:21.389 "state": 
"enabled", 00:30:21.389 "thread": "nvmf_tgt_poll_group_000" 00:30:21.389 } 00:30:21.389 ]' 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:21.389 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:21.647 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:21.647 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:22.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:22.584 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.584 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.585 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:22.585 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:23.150 00:30:23.150 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:23.150 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:23.150 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:23.407 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.407 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:23.407 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.407 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:23.407 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.407 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:23.407 { 00:30:23.407 "auth": { 00:30:23.407 "dhgroup": "ffdhe4096", 00:30:23.407 "digest": "sha512", 00:30:23.407 "state": "completed" 00:30:23.407 }, 00:30:23.407 "cntlid": 125, 00:30:23.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:23.407 "listen_address": { 00:30:23.407 "adrfam": "IPv4", 00:30:23.407 "traddr": "10.0.0.2", 00:30:23.407 "trsvcid": "4420", 00:30:23.407 "trtype": "TCP" 00:30:23.407 }, 00:30:23.407 "peer_address": { 00:30:23.407 "adrfam": "IPv4", 00:30:23.407 "traddr": "10.0.0.1", 00:30:23.407 
"trsvcid": "58128", 00:30:23.407 "trtype": "TCP" 00:30:23.407 }, 00:30:23.407 "qid": 0, 00:30:23.407 "state": "enabled", 00:30:23.407 "thread": "nvmf_tgt_poll_group_000" 00:30:23.407 } 00:30:23.407 ]' 00:30:23.407 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:23.407 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:23.407 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:23.664 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:23.664 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:23.664 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:23.664 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:23.664 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:23.923 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:23.923 11:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:24.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:24.858 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:25.116 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:25.373 00:30:25.373 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:25.373 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:25.373 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:25.631 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.631 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:25.631 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.631 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:25.631 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.631 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:25.631 { 00:30:25.631 "auth": { 00:30:25.631 "dhgroup": "ffdhe4096", 00:30:25.631 "digest": "sha512", 00:30:25.631 "state": "completed" 00:30:25.631 }, 00:30:25.631 "cntlid": 127, 00:30:25.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:25.631 "listen_address": { 00:30:25.632 "adrfam": "IPv4", 00:30:25.632 "traddr": "10.0.0.2", 00:30:25.632 "trsvcid": "4420", 00:30:25.632 "trtype": "TCP" 00:30:25.632 }, 00:30:25.632 "peer_address": { 00:30:25.632 "adrfam": "IPv4", 00:30:25.632 "traddr": "10.0.0.1", 
00:30:25.632 "trsvcid": "58154", 00:30:25.632 "trtype": "TCP" 00:30:25.632 }, 00:30:25.632 "qid": 0, 00:30:25.632 "state": "enabled", 00:30:25.632 "thread": "nvmf_tgt_poll_group_000" 00:30:25.632 } 00:30:25.632 ]' 00:30:25.632 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:25.889 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:25.889 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:25.889 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:30:25.889 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:25.889 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:25.889 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:25.889 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:26.146 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:26.146 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:26.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:26.714 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.972 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:27.230 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.230 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:27.230 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:27.230 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:27.490 00:30:27.748 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:27.748 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:27.748 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:28.006 { 00:30:28.006 "auth": { 00:30:28.006 "dhgroup": "ffdhe6144", 00:30:28.006 "digest": "sha512", 00:30:28.006 "state": "completed" 00:30:28.006 }, 00:30:28.006 "cntlid": 129, 00:30:28.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:28.006 "listen_address": { 00:30:28.006 "adrfam": "IPv4", 00:30:28.006 "traddr": "10.0.0.2", 00:30:28.006 "trsvcid": "4420", 00:30:28.006 "trtype": "TCP" 
00:30:28.006 }, 00:30:28.006 "peer_address": { 00:30:28.006 "adrfam": "IPv4", 00:30:28.006 "traddr": "10.0.0.1", 00:30:28.006 "trsvcid": "46982", 00:30:28.006 "trtype": "TCP" 00:30:28.006 }, 00:30:28.006 "qid": 0, 00:30:28.006 "state": "enabled", 00:30:28.006 "thread": "nvmf_tgt_poll_group_000" 00:30:28.006 } 00:30:28.006 ]' 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:28.006 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:28.573 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:28.573 11:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:29.139 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:29.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:29.139 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:29.139 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.139 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.140 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.140 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:29.140 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:29.140 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:29.398 11:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:29.398 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:29.965 00:30:29.965 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:29.965 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:29.965 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:30.224 { 00:30:30.224 "auth": { 00:30:30.224 "dhgroup": "ffdhe6144", 00:30:30.224 "digest": "sha512", 00:30:30.224 "state": "completed" 00:30:30.224 }, 00:30:30.224 "cntlid": 131, 00:30:30.224 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:30.224 "listen_address": { 00:30:30.224 "adrfam": "IPv4", 00:30:30.224 "traddr": "10.0.0.2", 00:30:30.224 "trsvcid": "4420", 00:30:30.224 "trtype": "TCP" 00:30:30.224 }, 00:30:30.224 "peer_address": { 00:30:30.224 "adrfam": "IPv4", 00:30:30.224 "traddr": "10.0.0.1", 00:30:30.224 "trsvcid": "47016", 00:30:30.224 "trtype": "TCP" 00:30:30.224 }, 00:30:30.224 "qid": 0, 00:30:30.224 "state": "enabled", 00:30:30.224 "thread": "nvmf_tgt_poll_group_000" 00:30:30.224 } 00:30:30.224 ]' 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:30.224 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:30.484 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:30.484 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:30.484 11:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:30.484 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:30.484 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:31.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:31.419 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:31.679 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.248 00:30:32.248 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:32.248 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:32.248 11:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:32.507 { 00:30:32.507 "auth": { 00:30:32.507 "dhgroup": "ffdhe6144", 
00:30:32.507 "digest": "sha512", 00:30:32.507 "state": "completed" 00:30:32.507 }, 00:30:32.507 "cntlid": 133, 00:30:32.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:32.507 "listen_address": { 00:30:32.507 "adrfam": "IPv4", 00:30:32.507 "traddr": "10.0.0.2", 00:30:32.507 "trsvcid": "4420", 00:30:32.507 "trtype": "TCP" 00:30:32.507 }, 00:30:32.507 "peer_address": { 00:30:32.507 "adrfam": "IPv4", 00:30:32.507 "traddr": "10.0.0.1", 00:30:32.507 "trsvcid": "47052", 00:30:32.507 "trtype": "TCP" 00:30:32.507 }, 00:30:32.507 "qid": 0, 00:30:32.507 "state": "enabled", 00:30:32.507 "thread": "nvmf_tgt_poll_group_000" 00:30:32.507 } 00:30:32.507 ]' 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:32.507 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:32.766 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:32.766 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:32.766 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:33.025 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:33.026 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:33.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:30:33.593 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:34.161 11:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:34.420 00:30:34.679 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:34.679 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:34.679 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:34.938 { 00:30:34.938 "auth": { 00:30:34.938 "dhgroup": 
"ffdhe6144", 00:30:34.938 "digest": "sha512", 00:30:34.938 "state": "completed" 00:30:34.938 }, 00:30:34.938 "cntlid": 135, 00:30:34.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:34.938 "listen_address": { 00:30:34.938 "adrfam": "IPv4", 00:30:34.938 "traddr": "10.0.0.2", 00:30:34.938 "trsvcid": "4420", 00:30:34.938 "trtype": "TCP" 00:30:34.938 }, 00:30:34.938 "peer_address": { 00:30:34.938 "adrfam": "IPv4", 00:30:34.938 "traddr": "10.0.0.1", 00:30:34.938 "trsvcid": "47084", 00:30:34.938 "trtype": "TCP" 00:30:34.938 }, 00:30:34.938 "qid": 0, 00:30:34.938 "state": "enabled", 00:30:34.938 "thread": "nvmf_tgt_poll_group_000" 00:30:34.938 } 00:30:34.938 ]' 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:34.938 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:35.505 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:35.505 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:36.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:36.074 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:36.332 11:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:36.973 00:30:37.231 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:37.231 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:37.231 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:37.490 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:37.490 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:37.490 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.490 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:37.490 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.490 11:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:37.490 { 00:30:37.490 "auth": { 00:30:37.490 "dhgroup": "ffdhe8192", 00:30:37.490 "digest": "sha512", 00:30:37.490 "state": "completed" 00:30:37.490 }, 00:30:37.490 "cntlid": 137, 00:30:37.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:37.490 "listen_address": { 00:30:37.490 "adrfam": "IPv4", 00:30:37.490 "traddr": "10.0.0.2", 00:30:37.490 "trsvcid": "4420", 00:30:37.490 "trtype": "TCP" 00:30:37.490 }, 00:30:37.490 "peer_address": { 00:30:37.490 "adrfam": "IPv4", 00:30:37.490 "traddr": "10.0.0.1", 00:30:37.490 "trsvcid": "47116", 00:30:37.490 "trtype": "TCP" 00:30:37.490 }, 00:30:37.490 "qid": 0, 00:30:37.490 "state": "enabled", 00:30:37.490 "thread": "nvmf_tgt_poll_group_000" 00:30:37.490 } 00:30:37.490 ]' 00:30:37.490 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:37.490 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:37.490 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:37.490 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:37.490 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:37.490 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:37.490 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:37.490 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:37.748 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:37.748 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:38.683 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:38.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:38.683 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:38.683 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.683 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.683 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.683 11:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:38.683 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:38.683 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:38.943 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:39.509 00:30:39.509 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:39.509 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:39.509 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:39.767 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:39.767 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:39.767 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.767 11:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:39.767 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.767 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:39.767 { 00:30:39.767 "auth": { 00:30:39.767 "dhgroup": "ffdhe8192", 00:30:39.767 "digest": "sha512", 00:30:39.767 "state": "completed" 00:30:39.767 }, 00:30:39.767 "cntlid": 139, 00:30:39.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:39.767 "listen_address": { 00:30:39.767 "adrfam": "IPv4", 00:30:39.767 "traddr": "10.0.0.2", 00:30:39.767 "trsvcid": "4420", 00:30:39.767 "trtype": "TCP" 00:30:39.767 }, 00:30:39.767 "peer_address": { 00:30:39.767 "adrfam": "IPv4", 00:30:39.767 "traddr": "10.0.0.1", 00:30:39.767 "trsvcid": "44684", 00:30:39.767 "trtype": "TCP" 00:30:39.767 }, 00:30:39.767 "qid": 0, 00:30:39.767 "state": "enabled", 00:30:39.767 "thread": "nvmf_tgt_poll_group_000" 00:30:39.767 } 00:30:39.767 ]' 00:30:39.767 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:40.026 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:40.026 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:40.026 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:40.026 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:40.026 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:40.026 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:40.026 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:40.285 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:40.285 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: --dhchap-ctrl-secret DHHC-1:02:Y2EyNGRiMmI3Zjk1YWVhNGQzZTQyZWQ4Y2EzYjhjODE0ZjBhMTdmNTVkYmY0Njc04VHHgg==: 00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:41.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
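The log above repeats one connect/authenticate cycle per digest/dhgroup/key combination: the host stack is restricted to a single digest and DH group, the host NQN is registered on the target with a DH-HMAC-CHAP key pair, and a controller is attached through the authenticated path. A minimal sketch of that cycle, assuming the RPC sockets used in this run; SUBNQN and HOSTNQN are placeholders, and key1/ckey1 name keys loaded earlier in the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme RPCs
    # Restrict the host to one digest/dhgroup pair for this iteration.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Register the host on the target with its DH-HMAC-CHAP key pair.
    "$rpc" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attach a controller; authentication happens during CONNECT.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
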
00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:41.219 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:41.220 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:42.153 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:42.153 11:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:42.153 { 00:30:42.153 "auth": { 00:30:42.153 "dhgroup": "ffdhe8192", 00:30:42.153 "digest": "sha512", 00:30:42.153 "state": "completed" 00:30:42.153 }, 00:30:42.153 "cntlid": 141, 00:30:42.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:42.153 "listen_address": { 00:30:42.153 "adrfam": "IPv4", 00:30:42.153 "traddr": "10.0.0.2", 00:30:42.153 "trsvcid": "4420", 00:30:42.153 "trtype": "TCP" 00:30:42.153 }, 00:30:42.153 "peer_address": { 00:30:42.153 "adrfam": "IPv4", 00:30:42.153 "traddr": "10.0.0.1", 00:30:42.153 "trsvcid": "44702", 00:30:42.153 "trtype": "TCP" 00:30:42.153 }, 00:30:42.153 "qid": 0, 00:30:42.153 "state": "enabled", 00:30:42.153 "thread": "nvmf_tgt_poll_group_000" 00:30:42.153 } 00:30:42.153 ]' 00:30:42.153 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:42.411 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:42.411 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:42.411 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:42.411 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:42.411 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:42.411 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:42.411 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:42.670 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:42.670 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:01:MjI3Y2YyYTU0YzU4ZTNhMDYyZDgwMTc2MDJmNTgxYWWY3LZ5: 00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:43.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
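Each cycle is then verified by dumping the subsystem's queue pairs and asserting that the negotiated digest, DH group, and authentication state match what was configured, before the controller is detached. A minimal sketch of that check, under the same placeholder names as above (rpc_cmd in the log is the harness helper for target-side RPCs):

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Detach before the next digest/dhgroup/key combination.
    hostrpc bdev_nvme_detach_controller nvme0
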
00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:43.234 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:43.492 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:44.094 00:30:44.094 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:44.094 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:44.094 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:44.353 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:44.353 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:44.353 
11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.353 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:44.353 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:44.354 { 00:30:44.354 "auth": { 00:30:44.354 "dhgroup": "ffdhe8192", 00:30:44.354 "digest": "sha512", 00:30:44.354 "state": "completed" 00:30:44.354 }, 00:30:44.354 "cntlid": 143, 00:30:44.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:44.354 "listen_address": { 00:30:44.354 "adrfam": "IPv4", 00:30:44.354 "traddr": "10.0.0.2", 00:30:44.354 "trsvcid": "4420", 00:30:44.354 "trtype": "TCP" 00:30:44.354 }, 00:30:44.354 "peer_address": { 00:30:44.354 "adrfam": "IPv4", 00:30:44.354 "traddr": "10.0.0.1", 00:30:44.354 "trsvcid": "44732", 00:30:44.354 "trtype": "TCP" 00:30:44.354 }, 00:30:44.354 "qid": 0, 00:30:44.354 "state": "enabled", 00:30:44.354 "thread": "nvmf_tgt_poll_group_000" 00:30:44.354 } 00:30:44.354 ]' 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:44.354 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:44.612 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:44.612 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:45.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
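Each pass of the keyid loop above first pins the host to a single digest and DH group with bdev_nvme_set_options, then re-attaches, so every key is negotiated under an explicitly chosen combination rather than whatever the host would prefer. A condensed sketch of one pass, assuming the host-side RPC socket from the log (the hostrpc wrapper and $hostnqn variable are illustrative):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  # Restrict the host to one digest/dhgroup pair, then attach with the key under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3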
00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:45.546 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:45.804 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:46.372 00:30:46.372 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:46.372 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:46.372 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:46.656 { 00:30:46.656 "auth": { 00:30:46.656 "dhgroup": "ffdhe8192", 00:30:46.656 "digest": "sha512", 00:30:46.656 "state": "completed" 00:30:46.656 }, 00:30:46.656 "cntlid": 145, 00:30:46.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:46.656 "listen_address": { 00:30:46.656 "adrfam": "IPv4", 00:30:46.656 "traddr": "10.0.0.2", 00:30:46.656 "trsvcid": "4420", 00:30:46.656 "trtype": "TCP" 00:30:46.656 }, 00:30:46.656 "peer_address": { 00:30:46.656 "adrfam": "IPv4", 00:30:46.656 "traddr": "10.0.0.1", 00:30:46.656 "trsvcid": "44760", 00:30:46.656 "trtype": "TCP" 00:30:46.656 }, 00:30:46.656 "qid": 0, 00:30:46.656 "state": "enabled", 00:30:46.656 "thread": "nvmf_tgt_poll_group_000" 00:30:46.656 } 00:30:46.656 ]' 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:46.656 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:46.915 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:46.915 11:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:00:Y2IyNTBhNTVkODMzNDgzYWEyMjA4Njk2NDdhMzVhNmMxYzFhZWNjODNkOGM1ZWZhYECUcg==: --dhchap-ctrl-secret 
DHHC-1:03:MWIzNTMyMjc5MjczNTE4YjQ0Njc2ZjIyYWY1Y2U0YzZkNWE2NjYyNmNhODFiZWMyNTk3YzBmNGVkYTM2NDg2ZplMxsw=: 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:47.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:30:47.851 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:30:48.418 2024/12/05 11:14:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:48.418 request: 00:30:48.418 { 00:30:48.418 "method": "bdev_nvme_attach_controller", 00:30:48.418 "params": { 00:30:48.418 "name": "nvme0", 00:30:48.418 "trtype": "tcp", 00:30:48.418 "traddr": "10.0.0.2", 00:30:48.418 "adrfam": "ipv4", 00:30:48.418 "trsvcid": "4420", 00:30:48.418 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:48.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:48.418 "prchk_reftag": false, 00:30:48.418 "prchk_guard": false, 00:30:48.418 "hdgst": false, 00:30:48.418 "ddgst": false, 00:30:48.418 "dhchap_key": "key2", 00:30:48.418 "allow_unrecognized_csi": false 00:30:48.418 } 00:30:48.418 } 00:30:48.418 Got JSON-RPC error response 00:30:48.418 GoRPCClient: error on JSON-RPC call 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
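These request/response dumps are expected failures: the host tries to attach with a key (or controller key) that the target has not registered for this host NQN, the DH-HMAC-CHAP negotiation aborts, and the RPC surfaces Code=-5 (Input/output error). The NOT wrapper from autotest_common.sh turns that into a passing assertion by inverting the exit status. A simplified sketch of the idea (the real helper also treats exit codes above 128, i.e. signals, specially):

  NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only if the wrapped command failed.
    ((es != 0))
  }
  NOT bdev_connect -b nvme0 --dhchap-key key2   # key2 is not registered for this host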
00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:48.418 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:48.996 2024/12/05 11:14:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:48.996 request: 00:30:48.996 { 00:30:48.996 "method": "bdev_nvme_attach_controller", 00:30:48.996 "params": { 00:30:48.996 "name": "nvme0", 00:30:48.996 "trtype": "tcp", 00:30:48.996 "traddr": "10.0.0.2", 00:30:48.996 "adrfam": "ipv4", 00:30:48.996 "trsvcid": "4420", 00:30:48.996 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:48.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:48.996 "prchk_reftag": false, 00:30:48.996 "prchk_guard": false, 00:30:48.996 "hdgst": false, 00:30:48.996 "ddgst": false, 00:30:48.996 "dhchap_key": "key1", 00:30:48.996 "dhchap_ctrlr_key": "ckey2", 00:30:48.996 "allow_unrecognized_csi": false 00:30:48.996 } 00:30:48.996 } 00:30:48.996 Got JSON-RPC error response 00:30:48.996 GoRPCClient: error on JSON-RPC call 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:30:48.996 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.997 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:30:48.997 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:48.997 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.997 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:48.997 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:49.588 2024/12/05 11:14:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:49.588 request: 00:30:49.588 { 00:30:49.588 "method": "bdev_nvme_attach_controller", 00:30:49.588 "params": { 00:30:49.588 "name": "nvme0", 00:30:49.588 "trtype": "tcp", 00:30:49.588 "traddr": "10.0.0.2", 00:30:49.588 "adrfam": "ipv4", 00:30:49.588 "trsvcid": "4420", 00:30:49.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:49.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:49.588 "prchk_reftag": false, 00:30:49.588 "prchk_guard": false, 00:30:49.588 "hdgst": false, 00:30:49.588 "ddgst": false, 00:30:49.588 "dhchap_key": "key1", 00:30:49.588 "dhchap_ctrlr_key": "ckey1", 00:30:49.588 "allow_unrecognized_csi": false 00:30:49.588 } 00:30:49.588 } 00:30:49.588 Got JSON-RPC error response 00:30:49.588 GoRPCClient: error on JSON-RPC call 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@655 -- # es=1 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76777 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76777 ']' 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76777 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.588 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76777 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:49.847 killing process with pid 76777 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76777' 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76777 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76777 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=81766 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 81766 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81766 ']' 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.847 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81766 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81766 ']' 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
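At this point the first target (pid 76777) is killed and a fresh one is started with --wait-for-rpc and the nvmf_auth log flag, so the remaining cases run against a target that traces its DH-HMAC-CHAP state machine. A sketch of the equivalent manual startup, with the command line taken from the log; polling rpc_get_methods is only an approximation of the suite's waitforlisten helper:

  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Wait until the app answers on the default RPC socket before issuing further RPCs.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 30 rpc_get_methods >/dev/null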
00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.217 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.474 null0 00:30:51.474 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.474 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:51.474 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Do5 00:30:51.474 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.474 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.8Sf ]] 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Sf 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uIM 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.474 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.he7 ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.he7 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:51.475 11:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YMX 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.QZT ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QZT 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.g7a 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
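After the restart the secrets are no longer passed inline: each /tmp/spdk.key-* file is registered in the target's keyring under a short name, and later RPCs refer to keys by that name only. A sketch of the first key/ckey pair, with file names copied from the log (the rpc_cmd wrapper is illustrative and talks to the target's default socket):

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
  rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Do5
  rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Sf
  # Hosts are then authorized by key name rather than by secret value:
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3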
00:30:51.475 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:52.410 nvme0n1 00:30:52.668 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:30:52.668 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:30:52.668 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:30:52.926 { 00:30:52.926 "auth": { 00:30:52.926 "dhgroup": "ffdhe8192", 00:30:52.926 "digest": "sha512", 00:30:52.926 "state": "completed" 00:30:52.926 }, 00:30:52.926 "cntlid": 1, 00:30:52.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:52.926 "listen_address": { 00:30:52.926 "adrfam": "IPv4", 00:30:52.926 "traddr": "10.0.0.2", 00:30:52.926 "trsvcid": "4420", 00:30:52.926 "trtype": "TCP" 00:30:52.926 }, 00:30:52.926 "peer_address": { 00:30:52.926 "adrfam": "IPv4", 00:30:52.926 "traddr": "10.0.0.1", 00:30:52.926 "trsvcid": "50078", 00:30:52.926 "trtype": "TCP" 00:30:52.926 }, 00:30:52.926 "qid": 0, 00:30:52.926 "state": "enabled", 00:30:52.926 "thread": "nvmf_tgt_poll_group_000" 00:30:52.926 } 00:30:52.926 ]' 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:52.926 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:53.492 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:53.492 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:30:54.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key3 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:30:54.084 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:54.651 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:54.651 2024/12/05 11:14:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:54.651 request: 00:30:54.651 { 00:30:54.651 "method": "bdev_nvme_attach_controller", 00:30:54.651 "params": { 00:30:54.651 "name": "nvme0", 00:30:54.651 "trtype": "tcp", 00:30:54.651 "traddr": "10.0.0.2", 00:30:54.651 "adrfam": "ipv4", 00:30:54.651 "trsvcid": "4420", 00:30:54.651 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:54.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:54.651 "prchk_reftag": false, 00:30:54.651 "prchk_guard": false, 00:30:54.651 "hdgst": false, 00:30:54.651 "ddgst": false, 00:30:54.651 "dhchap_key": "key3", 00:30:54.651 "allow_unrecognized_csi": false 00:30:54.651 } 00:30:54.651 } 00:30:54.651 Got JSON-RPC error response 00:30:54.652 GoRPCClient: error on JSON-RPC call 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:54.910 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:30:55.476 2024/12/05 11:14:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:55.476 request: 00:30:55.476 { 00:30:55.476 "method": "bdev_nvme_attach_controller", 00:30:55.476 "params": { 00:30:55.476 "name": "nvme0", 00:30:55.476 "trtype": "tcp", 00:30:55.476 "traddr": "10.0.0.2", 00:30:55.476 "adrfam": "ipv4", 00:30:55.476 "trsvcid": "4420", 00:30:55.476 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:55.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:55.476 "prchk_reftag": false, 00:30:55.476 "prchk_guard": false, 00:30:55.476 "hdgst": false, 00:30:55.476 "ddgst": false, 00:30:55.476 "dhchap_key": "key3", 00:30:55.476 "allow_unrecognized_csi": false 00:30:55.476 } 00:30:55.476 } 00:30:55.476 Got JSON-RPC error response 00:30:55.476 GoRPCClient: error on JSON-RPC call 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:55.476 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:55.476 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:55.476 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.476 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.476 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.476 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:30:55.476 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.476 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:55.734 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:30:55.993 2024/12/05 11:14:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:30:55.993 request: 00:30:55.993 { 00:30:55.993 "method": "bdev_nvme_attach_controller", 00:30:55.993 "params": { 00:30:55.993 "name": "nvme0", 00:30:55.993 "trtype": "tcp", 00:30:55.993 "traddr": "10.0.0.2", 00:30:55.993 "adrfam": "ipv4", 00:30:55.993 "trsvcid": "4420", 00:30:55.993 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:30:55.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:30:55.993 "prchk_reftag": false, 00:30:55.993 "prchk_guard": false, 00:30:55.993 "hdgst": false, 00:30:55.993 "ddgst": false, 00:30:55.993 "dhchap_key": "key0", 00:30:55.993 "dhchap_ctrlr_key": "key1", 00:30:55.993 "allow_unrecognized_csi": false 00:30:55.993 } 00:30:55.993 } 00:30:55.993 Got JSON-RPC error response 00:30:55.993 GoRPCClient: error on JSON-RPC call 00:30:55.993 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:30:55.993 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:55.993 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:55.993 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:55.993 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:30:55.993 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:30:55.993 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:30:56.559 nvme0n1 00:30:56.559 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:30:56.559 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:30:56.559 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:56.818 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.818 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:56.818 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:57.077 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 00:30:57.077 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.077 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:30:57.077 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.077 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:30:57.077 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:30:57.077 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:30:58.011 nvme0n1 00:30:58.270 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:30:58.270 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:58.270 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:30:58.528 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:30:58.788 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.788 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:58.788 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -l 0 --dhchap-secret DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: --dhchap-ctrl-secret DHHC-1:03:OGJkMWU3OWE3YTE0YTAzMjhmMTRhNDhiOWI4MmNlNjdkNTE1NGE5ZmFjNTExZmRkODM3ODNhM2MyYzhhZTE0YvVO1nU=: 00:30:59.723 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:30:59.724 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:30:59.982 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:31:00.547 2024/12/05 11:14:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:00.547 request: 00:31:00.547 { 00:31:00.547 "method": "bdev_nvme_attach_controller", 00:31:00.547 "params": { 00:31:00.547 "name": "nvme0", 00:31:00.547 "trtype": "tcp", 00:31:00.547 "traddr": "10.0.0.2", 00:31:00.547 "adrfam": "ipv4", 
00:31:00.547 "trsvcid": "4420", 00:31:00.547 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:31:00.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6", 00:31:00.547 "prchk_reftag": false, 00:31:00.547 "prchk_guard": false, 00:31:00.547 "hdgst": false, 00:31:00.547 "ddgst": false, 00:31:00.547 "dhchap_key": "key1", 00:31:00.547 "allow_unrecognized_csi": false 00:31:00.547 } 00:31:00.547 } 00:31:00.547 Got JSON-RPC error response 00:31:00.547 GoRPCClient: error on JSON-RPC call 00:31:00.547 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:00.547 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:00.547 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:00.547 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:00.547 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:00.547 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:00.547 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:01.921 nvme0n1 00:31:01.921 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:31:01.921 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:31:01.921 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:01.921 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:01.921 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:01.921 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:02.487 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:31:02.488 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.488 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:02.488 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.488 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:31:02.488 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:31:02.488 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:31:02.746 nvme0n1 00:31:02.746 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:31:02.746 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:02.746 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:31:03.005 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:03.005 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:03.005 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: '' 2s 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: ]] 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDE2ZTA3YjRmNWUzZDI5ODU3ZWMxMDRiYWNjYWJhMGGdvww8: 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:31:03.264 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: 2s 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:31:05.273 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: ]] 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTc0NWY4Yzk0N2ZkYTVmNDI1OTJhMzNmZTk5ZGU1ZThjM2Q3NWE2N2FiMTJhNzQ4+b2LMg==: 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:31:05.274 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:07.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:07.802 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:07.803 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:08.372 nvme0n1 00:31:08.372 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:08.372 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.372 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:08.372 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.372 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:08.372 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:09.009 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:31:09.009 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:09.009 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 00:31:09.267 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.267 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:31:09.267 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.267 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.268 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.268 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:31:09.268 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:31:09.526 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:31:09.526 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:31:09.526 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:31:09.785 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
00:31:10.352 2024/12/05 11:14:34 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:31:10.352 request: 00:31:10.352 { 00:31:10.352 "method": "bdev_nvme_set_keys", 00:31:10.352 "params": { 00:31:10.352 "name": "nvme0", 00:31:10.352 "dhchap_key": "key1", 00:31:10.352 "dhchap_ctrlr_key": "key3" 00:31:10.352 } 00:31:10.352 } 00:31:10.352 Got JSON-RPC error response 00:31:10.352 GoRPCClient: error on JSON-RPC call 00:31:10.353 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:10.353 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:10.353 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:10.353 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:10.353 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:31:10.353 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:10.353 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:31:10.919 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:31:10.919 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:31:11.856 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:31:11.856 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:11.856 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:12.114 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:31:13.109 nvme0n1 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:13.109 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:31:13.673 2024/12/05 11:14:38 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:31:13.673 request: 00:31:13.673 { 00:31:13.673 "method": "bdev_nvme_set_keys", 00:31:13.673 "params": { 00:31:13.673 "name": "nvme0", 00:31:13.673 "dhchap_key": "key2", 00:31:13.673 "dhchap_ctrlr_key": "key0" 00:31:13.673 } 00:31:13.673 } 00:31:13.673 Got JSON-RPC error response 00:31:13.673 GoRPCClient: error on JSON-RPC call 00:31:13.930 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:31:13.930 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:13.930 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:13.930 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:13.930 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:31:13.930 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:31:13.930 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:31:14.187 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:31:14.187 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:31:15.124 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:31:15.124 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:31:15.124 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:15.382 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76821 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76821 ']' 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76821 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:15.382 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76821 00:31:15.641 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:15.641 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:15.641 killing process with pid 76821 00:31:15.641 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76821' 00:31:15.641 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76821 00:31:15.641 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76821 00:31:15.899 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:31:15.899 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:15.899 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:15.900 rmmod nvme_tcp 00:31:15.900 rmmod nvme_fabrics 00:31:15.900 rmmod nvme_keyring 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 
00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 81766 ']' 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 81766 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81766 ']' 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81766 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81766 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81766' 00:31:15.900 killing process with pid 81766 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81766 00:31:15.900 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81766 00:31:16.158 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@254 -- # local dev 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:31:16.159 11:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:31:16.159 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:31:16.417 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:16.417 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:31:16.417 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Do5 /tmp/spdk.key-sha256.uIM /tmp/spdk.key-sha384.YMX /tmp/spdk.key-sha512.g7a /tmp/spdk.key-sha512.8Sf /tmp/spdk.key-sha384.he7 /tmp/spdk.key-sha256.QZT '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:31:16.418 00:31:16.418 real 3m20.873s 00:31:16.418 user 7m56.799s 00:31:16.418 sys 0m36.116s 00:31:16.418 11:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:16.418 ************************************ 00:31:16.418 END TEST nvmf_auth_target 00:31:16.418 ************************************ 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:16.418 ************************************ 00:31:16.418 START TEST nvmf_bdevio_no_huge 00:31:16.418 ************************************ 00:31:16.418 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:31:16.418 * Looking for test storage... 00:31:16.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:16.418 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:16.418 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:31:16.418 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.702 --rc genhtml_branch_coverage=1 00:31:16.702 --rc genhtml_function_coverage=1 00:31:16.702 --rc genhtml_legend=1 00:31:16.702 --rc geninfo_all_blocks=1 00:31:16.702 --rc geninfo_unexecuted_blocks=1 00:31:16.702 00:31:16.702 ' 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.702 --rc genhtml_branch_coverage=1 00:31:16.702 --rc genhtml_function_coverage=1 00:31:16.702 --rc genhtml_legend=1 00:31:16.702 --rc geninfo_all_blocks=1 00:31:16.702 --rc geninfo_unexecuted_blocks=1 00:31:16.702 00:31:16.702 ' 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.702 --rc genhtml_branch_coverage=1 00:31:16.702 --rc genhtml_function_coverage=1 00:31:16.702 --rc genhtml_legend=1 00:31:16.702 --rc geninfo_all_blocks=1 00:31:16.702 --rc geninfo_unexecuted_blocks=1 00:31:16.702 00:31:16.702 ' 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:16.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.702 --rc genhtml_branch_coverage=1 00:31:16.702 --rc genhtml_function_coverage=1 00:31:16.702 --rc genhtml_legend=1 00:31:16.702 --rc geninfo_all_blocks=1 00:31:16.702 --rc geninfo_unexecuted_blocks=1 00:31:16.702 00:31:16.702 ' 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.702 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:16.703 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:16.703 11:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@280 -- # nvmf_veth_init 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@223 -- # create_target_ns 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # create_main_bridge 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@105 -- # delete_main_bridge 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:16.703 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:16.704 11:14:41 
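Condensed, the create_target_ns and create_main_bridge steps traced above come down to a short iproute2/iptables sequence (a sketch of the equivalent manual commands, run as root; delete_main_bridge first checks /sys/class/net/nvmf_br/address so reruns stay idempotent):

    ip netns add nvmf_ns_spdk                      # namespace the target will live in
    ip netns exec nvmf_ns_spdk ip link set lo up   # loopback inside the namespace
    ip link add nvmf_br type bridge                # host-side bridge joining all veth peers
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT   # let bridged traffic through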
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up initiator0 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target0 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0 up 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:31:16.704 11:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target0 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:16.704 10.0.0.1 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:16.704 10.0.0.2 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator0 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:31:16.704 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target0_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up initiator1 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:31:16.965 11:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target1 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1 up 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target1 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772163 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@197 -- # ip=10.0.0.3 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:31:16.965 10.0.0.3 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772164 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:31:16.965 10.0.0.4 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator1 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.965 11:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:31:16.965 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target1_br 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
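Both setup_interface_pair iterations above follow the same recipe; stripped of the eval/set_up plumbing, pair 0 amounts to the following (a sketch mirroring the traced commands; the script additionally tags each iptables rule with an SPDK_NVMF comment). Pair 1 repeats it with initiator1/target1 and 10.0.0.3/10.0.0.4, and the ping_ips loop that follows verifies every address:

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk         # target side moves into the namespace

    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias    # record the IP for later lookup
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias

    ip link set initiator0_br master nvmf_br       # bridge the peers so the two ends meet
    ip link set target0_br    master nvmf_br
    for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
    ip netns exec nvmf_ns_spdk ip link set target0 up

    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port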
-- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 2 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:16.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:31:16.966 00:31:16.966 --- 10.0.0.1 ping statistics --- 00:31:16.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.966 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:16.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:16.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:31:16.966 00:31:16.966 --- 10.0.0.2 ping statistics --- 00:31:16.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.966 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:31:16.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:16.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:31:16.966 00:31:16.966 --- 10.0.0.3 ping statistics --- 00:31:16.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.966 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:16.966 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:16.967 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:16.967 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:31:16.967 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:16.967 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:16.967 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:31:16.967 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:31:17.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:31:17.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:31:17.227 00:31:17.227 --- 10.0.0.4 ping statistics --- 00:31:17.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.227 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # return 0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
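Note how get_ip_address above recovers each address from the ifalias written during setup rather than parsing ip-addr output; ip netns exec remounts sysfs, so the same path works inside the namespace. A minimal sketch of that lookup (get_ip is a hypothetical helper condensing the traced logic):

    get_ip() {  # usage: get_ip <dev> [netns]
        if [ -n "${2:-}" ]; then
            ip netns exec "$2" cat "/sys/class/net/$1/ifalias"
        else
            cat "/sys/class/net/$1/ifalias"
        fi
    }

    NVMF_FIRST_INITIATOR_IP=$(get_ip initiator0)           # 10.0.0.1
    NVMF_FIRST_TARGET_IP=$(get_ip target0 nvmf_ns_spdk)    # 10.0.0.2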
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.227 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:31:17.228 ' 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=82645 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 82645 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82645 ']' 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:31:17.228 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:17.228 [2024-12-05 11:14:41.800115] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:31:17.228 [2024-12-05 11:14:41.800230] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:31:17.487 [2024-12-05 11:14:41.977459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.487 [2024-12-05 11:14:42.071431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.487 [2024-12-05 11:14:42.071500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.487 [2024-12-05 11:14:42.071516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.487 [2024-12-05 11:14:42.071530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.487 [2024-12-05 11:14:42.071541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
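nvmfappstart above is the heart of the no-huge variant: the target runs inside the namespace with hugepages disabled and 1024 MB of ordinary memory (the EAL line shows -m 1024 --no-huge). Boiled down, the launch is (a sketch; the real wrapper also installs traps and polls /var/tmp/spdk.sock via waitforlisten before returning):

    # -m 0x78 pins reactors to cores 3-6, matching the reactor notices below
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!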
00:31:17.487 [2024-12-05 11:14:42.072576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:17.487 [2024-12-05 11:14:42.073790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:17.487 [2024-12-05 11:14:42.073891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.487 [2024-12-05 11:14:42.073891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.425 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:18.425 [2024-12-05 11:14:42.989023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:18.425 Malloc0 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:18.425 [2024-12-05 11:14:43.045520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:18.425 { 00:31:18.425 "params": { 00:31:18.425 "name": "Nvme$subsystem", 00:31:18.425 "trtype": "$TEST_TRANSPORT", 00:31:18.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.425 "adrfam": "ipv4", 00:31:18.425 "trsvcid": "$NVMF_PORT", 00:31:18.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.425 "hdgst": ${hdgst:-false}, 00:31:18.425 "ddgst": ${ddgst:-false} 00:31:18.425 }, 00:31:18.425 "method": "bdev_nvme_attach_controller" 00:31:18.425 } 00:31:18.425 EOF 00:31:18.425 )") 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:31:18.425 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:18.425 "params": { 00:31:18.425 "name": "Nvme1", 00:31:18.426 "trtype": "tcp", 00:31:18.426 "traddr": "10.0.0.2", 00:31:18.426 "adrfam": "ipv4", 00:31:18.426 "trsvcid": "4420", 00:31:18.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:18.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:18.426 "hdgst": false, 00:31:18.426 "ddgst": false 00:31:18.426 }, 00:31:18.426 "method": "bdev_nvme_attach_controller" 00:31:18.426 }' 00:31:18.685 [2024-12-05 11:14:43.110061] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
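[Editor's note] Stripped of the rpc_cmd/xtrace wrapping, the target-side configuration traced above reduces to five RPCs. Shown here as plain rpc.py calls against the default /var/tmp/spdk.sock; a sketch of the same sequence, not the harness's literal invocation:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON emitted by gen_nvmf_target_json (printed above) is then handed to bdevio via --json /dev/fd/62, so the initiator-side bdev_nvme_attach_controller runs from static config rather than over RPC.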
00:31:18.685 [2024-12-05 11:14:43.110162] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82705 ] 00:31:18.685 [2024-12-05 11:14:43.278018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:18.943 [2024-12-05 11:14:43.381693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.943 [2024-12-05 11:14:43.381787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.943 [2024-12-05 11:14:43.381795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.201 I/O targets: 00:31:19.201 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:19.201 00:31:19.201 00:31:19.201 CUnit - A unit testing framework for C - Version 2.1-3 00:31:19.201 http://cunit.sourceforge.net/ 00:31:19.201 00:31:19.201 00:31:19.201 Suite: bdevio tests on: Nvme1n1 00:31:19.201 Test: blockdev write read block ...passed 00:31:19.201 Test: blockdev write zeroes read block ...passed 00:31:19.201 Test: blockdev write zeroes read no split ...passed 00:31:19.201 Test: blockdev write zeroes read split ...passed 00:31:19.201 Test: blockdev write zeroes read split partial ...passed 00:31:19.201 Test: blockdev reset ...[2024-12-05 11:14:43.750330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:19.201 [2024-12-05 11:14:43.750448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244eeb0 (9): Bad file descriptor 00:31:19.201 [2024-12-05 11:14:43.763470] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:31:19.201 passed 00:31:19.201 Test: blockdev write read 8 blocks ...passed 00:31:19.202 Test: blockdev write read size > 128k ...passed 00:31:19.202 Test: blockdev write read invalid size ...passed 00:31:19.202 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:19.202 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:19.202 Test: blockdev write read max offset ...passed 00:31:19.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:19.464 Test: blockdev writev readv 8 blocks ...passed 00:31:19.464 Test: blockdev writev readv 30 x 1block ...passed 00:31:19.464 Test: blockdev writev readv block ...passed 00:31:19.464 Test: blockdev writev readv size > 128k ...passed 00:31:19.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:19.464 Test: blockdev comparev and writev ...[2024-12-05 11:14:43.936566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.464 [2024-12-05 11:14:43.936623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:19.464 [2024-12-05 11:14:43.936641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.464 [2024-12-05 11:14:43.936654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:19.464 [2024-12-05 11:14:43.937225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.464 [2024-12-05 11:14:43.937248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:19.464 [2024-12-05 11:14:43.937263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.464 [2024-12-05 11:14:43.937274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:19.464 [2024-12-05 11:14:43.937705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.464 [2024-12-05 11:14:43.937727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:19.464 [2024-12-05 11:14:43.937742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.464 [2024-12-05 11:14:43.937752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:19.465 [2024-12-05 11:14:43.938187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.465 [2024-12-05 11:14:43.938208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:19.465 [2024-12-05 11:14:43.938223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:19.465 [2024-12-05 11:14:43.938234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:19.465 passed 00:31:19.465 Test: blockdev nvme passthru rw ...passed 00:31:19.465 Test: blockdev nvme passthru vendor specific ...[2024-12-05 11:14:44.020951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:19.465 [2024-12-05 11:14:44.021001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:19.465 [2024-12-05 11:14:44.021115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:19.465 [2024-12-05 11:14:44.021129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:19.465 [2024-12-05 11:14:44.021226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:19.465 [2024-12-05 11:14:44.021239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:19.465 [2024-12-05 11:14:44.021358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:19.465 [2024-12-05 11:14:44.021370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:19.465 passed 00:31:19.465 Test: blockdev nvme admin passthru ...passed 00:31:19.465 Test: blockdev copy ...passed 00:31:19.465 00:31:19.465 Run Summary: Type Total Ran Passed Failed Inactive 00:31:19.465 suites 1 1 n/a 0 0 00:31:19.465 tests 23 23 23 0 0 00:31:19.465 asserts 152 152 152 0 n/a 00:31:19.465 00:31:19.465 Elapsed time = 0.945 seconds 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:20.032 rmmod nvme_tcp 00:31:20.032 rmmod nvme_fabrics 00:31:20.032 rmmod nvme_keyring 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@106 -- # set -e 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 82645 ']' 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 82645 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82645 ']' 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82645 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:20.032 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82645 00:31:20.291 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:20.291 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:20.291 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82645' 00:31:20.291 killing process with pid 82645 00:31:20.291 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82645 00:31:20.291 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82645 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:31:20.549 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:31:20.807 00:31:20.807 real 0m4.400s 00:31:20.807 user 0m14.621s 00:31:20.807 sys 0m1.842s 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 
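[Editor's note] The nvmftestfini/nvmf_fini teardown traced above follows a fixed order: stop the app, unload the kernel modules, drop the namespace, then unwind the bridge and veth devices. Distilled as a sketch (the explicit ip netns delete is an assumption; the trace hides _remove_target_ns behind an fd-15 redirect):

  kill $nvmfpid && wait $nvmfpid                 # killprocess / wait on pid 82645
  modprobe -v -r nvme-tcp                        # also pulls out nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_ns_spdk                   # assumed body of _remove_target_ns
  ip link delete nvmf_br
  ip link delete initiator0                      # each veth peer (*_br end) goes with it
  ip link delete initiator1
  # target0/target1 are skipped ("continue"): they vanished with the namespace.
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK_NVMF rules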
00:31:20.807 ************************************ 00:31:20.807 END TEST nvmf_bdevio_no_huge 00:31:20.807 ************************************ 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:20.807 ************************************ 00:31:20.807 START TEST nvmf_tls 00:31:20.807 ************************************ 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:31:20.807 * Looking for test storage... 00:31:20.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:20.807 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:21.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.065 --rc genhtml_branch_coverage=1 00:31:21.065 --rc genhtml_function_coverage=1 00:31:21.065 --rc genhtml_legend=1 00:31:21.065 --rc geninfo_all_blocks=1 00:31:21.065 --rc geninfo_unexecuted_blocks=1 00:31:21.065 00:31:21.065 ' 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:21.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.065 --rc genhtml_branch_coverage=1 00:31:21.065 --rc genhtml_function_coverage=1 00:31:21.065 --rc genhtml_legend=1 00:31:21.065 --rc geninfo_all_blocks=1 00:31:21.065 --rc geninfo_unexecuted_blocks=1 00:31:21.065 00:31:21.065 ' 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:21.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.065 --rc genhtml_branch_coverage=1 00:31:21.065 --rc genhtml_function_coverage=1 00:31:21.065 --rc genhtml_legend=1 00:31:21.065 --rc geninfo_all_blocks=1 00:31:21.065 --rc geninfo_unexecuted_blocks=1 00:31:21.065 00:31:21.065 ' 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:21.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.065 --rc genhtml_branch_coverage=1 00:31:21.065 --rc genhtml_function_coverage=1 00:31:21.065 --rc genhtml_legend=1 00:31:21.065 --rc geninfo_all_blocks=1 00:31:21.065 --rc geninfo_unexecuted_blocks=1 00:31:21.065 00:31:21.065 ' 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.065 11:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.065 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:21.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 
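[Editor's note] nvmftestinit now rebuilds the virtual topology for the TLS tests: one nvmf_ns_spdk namespace, one nvmf_br bridge, and two initiator/target veth pairs. The pair-0 sequence traced below, distilled and grouped for readability (a sketch; each device also gets an "ip link set ... up"):

  ip netns add nvmf_ns_spdk
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk                      # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias       # IPs are mirrored into ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip link set initiator0_br master nvmf_br                    # both *_br peers join the bridge
  ip link set target0_br master nvmf_br
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

Addresses come from a 32-bit pool: val_to_ip unpacks 167772161 (0x0A000001) into 10.0.0.1 with printf '%u.%u.%u.%u', and each pair consumes two consecutive values, so pair 1 below gets 10.0.0.3/10.0.0.4.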
00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@280 -- # nvmf_veth_init 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@223 -- # create_target_ns 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # create_main_bridge 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@105 -- # delete_main_bridge 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # 
eval ' ip link set nvmf_br up' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator0 00:31:21.066 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.067 11:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target0 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0 up 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target0_br 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target0 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator0/ifalias 00:31:21.067 10.0.0.1 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:21.067 10.0.0.2 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator0 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:31:21.067 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator0_br 
00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target0_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 
-- # local dev=initiator1 peer=initiator1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator1 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target1 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1 up 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target1_br 00:31:21.326 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772163 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:31:21.327 10.0.0.3 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772164 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:31:21.327 10.0.0.4 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target1_br 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 2 00:31:21.327 11:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:21.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:21.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:31:21.327 00:31:21.327 --- 10.0.0.1 ping statistics --- 00:31:21.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.327 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:31:21.327 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:21.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:21.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:31:21.328 00:31:21.328 --- 10.0.0.2 ping statistics --- 00:31:21.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.328 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:31:21.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:21.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:31:21.328 00:31:21.328 --- 10.0.0.3 ping statistics --- 00:31:21.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.328 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.328 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:31:21.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:31:21.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:31:21.586 00:31:21.586 --- 10.0.0.4 ping statistics --- 00:31:21.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.586 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # return 0 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:31:21.586 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:21.586 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
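Up to this point the trace is nvmf/setup.sh building its initiator/target pairs: one veth pair per endpoint, the target end moved into the nvmf_ns_spdk namespace, addresses drawn from an integer pool (167772161 onward, i.e. 10.0.0.1 onward), each address mirrored into /sys/class/net/<dev>/ifalias so the later get_ip_address calls can read it back, and an iptables ACCEPT rule for the NVMe/TCP port. A condensed standalone sketch of pair 1, assuming root, iproute2, and an already-created nvmf_br bridge and nvmf_ns_spdk namespace (the shape of the script, not the script itself):

# hypothetical replay of setup_interface_pair 1 veth 167772163 tcp
val_to_ip() {  # 167772163 -> 10.0.0.3: the four big-endian octets of a 32-bit value
  printf '%u.%u.%u.%u\n' "$(($1 >> 24))" "$((($1 >> 16) & 0xff))" \
    "$((($1 >> 8) & 0xff))" "$(($1 & 0xff))"
}
ip link add initiator1 type veth peer name initiator1_br
ip link add target1 type veth peer name target1_br
ip link set target1 netns nvmf_ns_spdk                  # target side lives in the namespace
ip addr add "$(val_to_ip 167772163)/24" dev initiator1                           # 10.0.0.3
ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772164)/24" dev target1   # 10.0.0.4
echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias   # what get_ip_address reads back
ip link set initiator1 up
ip netns exec nvmf_ns_spdk ip link set target1 up
for port in initiator1_br target1_br; do                # host-side peers join the bridge
  ip link set "$port" master nvmf_br && ip link set "$port" up
done
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'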
00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:31:21.587 ' 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=82949 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 82949 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82949 ']' 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.587 11:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.587 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:21.587 [2024-12-05 11:14:46.159572] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:31:21.587 [2024-12-05 11:14:46.159689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.845 [2024-12-05 11:14:46.317290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.845 [2024-12-05 11:14:46.380004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.845 [2024-12-05 11:14:46.380095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.845 [2024-12-05 11:14:46.380110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.845 [2024-12-05 11:14:46.380123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.845 [2024-12-05 11:14:46.380134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
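nvmfappstart launches the target on core mask 0x2 inside the namespace with --wait-for-rpc, so the ssl socket implementation can be configured before subsystem initialization, and waitforlisten then blocks until the RPC socket answers. Roughly (the real waitforlisten in autotest_common.sh adds a retry limit and richer error reporting; this is only the gist):

ip netns exec nvmf_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
# the RPC endpoint is a unix-domain socket on the shared filesystem,
# so no "ip netns exec" is needed to poll it from the host side
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done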
00:31:21.845 [2024-12-05 11:14:46.380499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.778 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:22.778 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:22.779 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:22.779 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.779 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:22.779 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.779 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:31:22.779 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:31:23.036 true 00:31:23.036 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:23.036 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:31:23.602 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:31:23.602 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:31:23.602 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:31:23.860 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:23.860 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:31:24.425 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:31:24.425 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:31:24.425 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:31:24.990 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:24.990 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:31:25.248 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:31:25.248 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:31:25.248 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:25.248 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:31:25.506 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:31:25.506 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:31:25.506 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:31:25.506 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:31:25.506 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:31:25.764 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:31:25.764 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:31:25.764 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:31:26.024 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:31:26.024 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Br65ye2EuR 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:31:26.284 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.37RfhDCClp 00:31:26.285 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:26.285 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # 
echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:31:26.285 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Br65ye2EuR 00:31:26.285 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.37RfhDCClp 00:31:26.285 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:31:26.545 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:31:26.859 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Br65ye2EuR 00:31:26.859 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Br65ye2EuR 00:31:26.859 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:27.149 [2024-12-05 11:14:51.674295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.149 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:27.408 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:27.667 [2024-12-05 11:14:52.190321] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:27.667 [2024-12-05 11:14:52.190524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.667 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:27.926 malloc0 00:31:27.926 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:28.185 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Br65ye2EuR 00:31:28.444 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:31:28.702 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Br65ye2EuR 00:31:40.904 Initializing NVMe Controllers 00:31:40.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.904 Initialization complete. Launching workers. 
00:31:40.904 ======================================================== 00:31:40.904 Latency(us) 00:31:40.904 Device Information : IOPS MiB/s Average min max 00:31:40.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12457.69 48.66 5137.99 1652.25 15998.72 00:31:40.904 ======================================================== 00:31:40.904 Total : 12457.69 48.66 5137.99 1652.25 15998.72 00:31:40.904 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Br65ye2EuR 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Br65ye2EuR 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83316 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83316 /var/tmp/bdevperf.sock 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83316 ']' 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:40.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:40.904 [2024-12-05 11:15:03.410651] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
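The two keys minted above follow the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash field (01 here), and a base64 blob, written to mktemp files that chmod 0600 makes owner-only before they are handed to the keyring. Judging from the helper's inline python and the 48-character blob, the blob is the raw key bytes with a little-endian CRC-32 appended; treat this reconstruction as an assumption rather than the helper's verbatim source:

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"   # the hex string is used as raw ASCII bytes
crc = struct.pack("<I", zlib.crc32(key))    # assumption: CRC-32 of the key, little-endian
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF
# expected to print the NVMeTLSkey-1:01:MDAx...JEiQ: value captured above

setup_nvmf_tgt itself, traced just before the perf run, is a plain RPC sequence; condensed, with every command as captured above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" keyring_file_add_key key0 /tmp/tmp.Br65ye2EuR
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0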
00:31:40.904 [2024-12-05 11:15:03.410760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83316 ] 00:31:40.904 [2024-12-05 11:15:03.564353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.904 [2024-12-05 11:15:03.629021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:40.904 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Br65ye2EuR 00:31:40.904 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:40.904 [2024-12-05 11:15:04.272973] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:40.904 TLSTESTn1 00:31:40.904 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:40.904 Running I/O for 10 seconds... 00:31:41.835 5443.00 IOPS, 21.26 MiB/s [2024-12-05T11:15:07.861Z] 5398.50 IOPS, 21.09 MiB/s [2024-12-05T11:15:08.796Z] 5376.67 IOPS, 21.00 MiB/s [2024-12-05T11:15:09.730Z] 5298.00 IOPS, 20.70 MiB/s [2024-12-05T11:15:10.670Z] 5237.20 IOPS, 20.46 MiB/s [2024-12-05T11:15:11.607Z] 5239.83 IOPS, 20.47 MiB/s [2024-12-05T11:15:12.540Z] 5261.86 IOPS, 20.55 MiB/s [2024-12-05T11:15:13.474Z] 5240.62 IOPS, 20.47 MiB/s [2024-12-05T11:15:14.902Z] 5235.78 IOPS, 20.45 MiB/s [2024-12-05T11:15:14.902Z] 5244.10 IOPS, 20.48 MiB/s 00:31:50.250 Latency(us) 00:31:50.250 [2024-12-05T11:15:14.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.250 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:50.250 Verification LBA range: start 0x0 length 0x2000 00:31:50.250 TLSTESTn1 : 10.01 5249.85 20.51 0.00 0.00 24342.26 5055.63 20721.86 00:31:50.250 [2024-12-05T11:15:14.902Z] =================================================================================================================== 00:31:50.250 [2024-12-05T11:15:14.902Z] Total : 5249.85 20.51 0.00 0.00 24342.26 5055.63 20721.86 00:31:50.250 { 00:31:50.250 "results": [ 00:31:50.250 { 00:31:50.250 "job": "TLSTESTn1", 00:31:50.250 "core_mask": "0x4", 00:31:50.250 "workload": "verify", 00:31:50.250 "status": "finished", 00:31:50.250 "verify_range": { 00:31:50.250 "start": 0, 00:31:50.250 "length": 8192 00:31:50.250 }, 00:31:50.250 "queue_depth": 128, 00:31:50.250 "io_size": 4096, 00:31:50.250 "runtime": 10.013424, 00:31:50.250 "iops": 5249.852597872616, 00:31:50.250 "mibps": 20.507236710439905, 00:31:50.250 "io_failed": 0, 00:31:50.250 "io_timeout": 0, 00:31:50.250 "avg_latency_us": 24342.257491967473, 00:31:50.250 "min_latency_us": 5055.634285714285, 00:31:50.251 "max_latency_us": 20721.859047619047 00:31:50.251 } 00:31:50.251 ], 00:31:50.251 "core_count": 1 00:31:50.251 } 00:31:50.251 11:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83316 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83316 ']' 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83316 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83316 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:50.251 killing process with pid 83316 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83316' 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83316 00:31:50.251 Received shutdown signal, test time was about 10.000000 seconds 00:31:50.251 00:31:50.251 Latency(us) 00:31:50.251 [2024-12-05T11:15:14.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.251 [2024-12-05T11:15:14.903Z] =================================================================================================================== 00:31:50.251 [2024-12-05T11:15:14.903Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83316 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.37RfhDCClp 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.37RfhDCClp 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.37RfhDCClp 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.37RfhDCClp 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83467 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83467 /var/tmp/bdevperf.sock 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83467 ']' 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.251 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:50.251 [2024-12-05 11:15:14.767413] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:31:50.251 [2024-12-05 11:15:14.767484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83467 ] 00:31:50.528 [2024-12-05 11:15:14.910383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:50.528 [2024-12-05 11:15:14.965217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.528 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.528 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:50.528 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.37RfhDCClp 00:31:50.785 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:51.044 [2024-12-05 11:15:15.593066] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:51.044 [2024-12-05 11:15:15.601054] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:51.044 [2024-12-05 11:15:15.601638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1537b00 (107): Transport endpoint is not connected 00:31:51.044 [2024-12-05 11:15:15.602628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1537b00 (9): Bad file descriptor 00:31:51.044 [2024-12-05 
11:15:15.603643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:31:51.044 [2024-12-05 11:15:15.603663] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:51.044 [2024-12-05 11:15:15.603673] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:31:51.044 [2024-12-05 11:15:15.603687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:31:51.044 2024/12/05 11:15:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:51.044 request: 00:31:51.044 { 00:31:51.044 "method": "bdev_nvme_attach_controller", 00:31:51.044 "params": { 00:31:51.044 "name": "TLSTEST", 00:31:51.044 "trtype": "tcp", 00:31:51.044 "traddr": "10.0.0.2", 00:31:51.044 "adrfam": "ipv4", 00:31:51.044 "trsvcid": "4420", 00:31:51.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:51.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:51.044 "prchk_reftag": false, 00:31:51.044 "prchk_guard": false, 00:31:51.044 "hdgst": false, 00:31:51.044 "ddgst": false, 00:31:51.044 "psk": "key0", 00:31:51.044 "allow_unrecognized_csi": false 00:31:51.044 } 00:31:51.044 } 00:31:51.044 Got JSON-RPC error response 00:31:51.044 GoRPCClient: error on JSON-RPC call 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83467 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83467 ']' 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83467 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83467 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:51.044 killing process with pid 83467 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83467' 00:31:51.044 Received shutdown signal, test time was about 10.000000 seconds 00:31:51.044 00:31:51.044 Latency(us) 00:31:51.044 [2024-12-05T11:15:15.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.044 [2024-12-05T11:15:15.696Z] =================================================================================================================== 00:31:51.044 [2024-12-05T11:15:15.696Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83467 00:31:51.044 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83467 00:31:51.303 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:51.303 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:31:51.303 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:51.303 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:51.303 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:51.303 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Br65ye2EuR 00:31:51.303 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Br65ye2EuR 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Br65ye2EuR 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Br65ye2EuR 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83506 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83506 /var/tmp/bdevperf.sock 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83506 ']' 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:51.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
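Each negative TLS case in this run drives the same three-step pattern from target/tls.sh: start bdevperf idle, register the candidate PSK file in its keyring, then attempt a TLS attach that is expected to fail. A minimal standalone sketch of that sequence, using the socket path, flags, and NQNs exactly as logged here (the harness additionally waits for the RPC socket before issuing calls):

# start bdevperf idle (-z) on a private RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# register the candidate PSK file in the keyring under the name "key0"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 /tmp/tmp.Br65ye2EuR

# attempt the TLS attach; with a key the target was never configured with,
# this is expected to fail (Code=-5 Input/output error, as seen above)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
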
00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:51.304 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:51.304 [2024-12-05 11:15:15.904363] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:31:51.304 [2024-12-05 11:15:15.904462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83506 ] 00:31:51.562 [2024-12-05 11:15:16.050445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.562 [2024-12-05 11:15:16.103716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:52.496 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:52.496 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:52.496 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Br65ye2EuR 00:31:52.496 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:31:52.755 [2024-12-05 11:15:17.354177] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:52.755 [2024-12-05 11:15:17.361341] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:52.755 [2024-12-05 11:15:17.361384] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:31:52.755 [2024-12-05 11:15:17.361433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:52.755 [2024-12-05 11:15:17.361856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea4b00 (107): Transport endpoint is not connected 00:31:52.755 [2024-12-05 11:15:17.362846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea4b00 (9): Bad file descriptor 00:31:52.755 [2024-12-05 11:15:17.363843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:31:52.755 [2024-12-05 11:15:17.363863] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:52.755 [2024-12-05 11:15:17.363874] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:31:52.755 [2024-12-05 11:15:17.363895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:31:52.755 2024/12/05 11:15:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:52.755 request: 00:31:52.755 { 00:31:52.755 "method": "bdev_nvme_attach_controller", 00:31:52.755 "params": { 00:31:52.755 "name": "TLSTEST", 00:31:52.755 "trtype": "tcp", 00:31:52.755 "traddr": "10.0.0.2", 00:31:52.755 "adrfam": "ipv4", 00:31:52.755 "trsvcid": "4420", 00:31:52.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:52.755 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:52.755 "prchk_reftag": false, 00:31:52.755 "prchk_guard": false, 00:31:52.755 "hdgst": false, 00:31:52.755 "ddgst": false, 00:31:52.755 "psk": "key0", 00:31:52.755 "allow_unrecognized_csi": false 00:31:52.755 } 00:31:52.755 } 00:31:52.755 Got JSON-RPC error response 00:31:52.755 GoRPCClient: error on JSON-RPC call 00:31:52.755 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83506 00:31:52.755 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83506 ']' 00:31:52.755 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83506 00:31:52.755 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:31:52.755 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.755 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83506 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:53.015 killing process with pid 83506 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83506' 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83506 00:31:53.015 Received shutdown signal, test time was about 10.000000 seconds 00:31:53.015 00:31:53.015 Latency(us) 00:31:53.015 [2024-12-05T11:15:17.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.015 [2024-12-05T11:15:17.667Z] =================================================================================================================== 00:31:53.015 [2024-12-05T11:15:17.667Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83506 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:53.015 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Br65ye2EuR 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Br65ye2EuR 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Br65ye2EuR 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Br65ye2EuR 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83558 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83558 /var/tmp/bdevperf.sock 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83558 ']' 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:53.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:53.015 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:53.015 [2024-12-05 11:15:17.637274] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
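The two mismatched-NQN cases (host2 against cnode1 above, host1 against cnode2 below) fail on the target side before any NVMe-level processing happens: the TLS 1.3 PSK identity used for the server-side key lookup is built from both NQNs, so swapping either one yields an identity with no registered key, even though key0 itself is valid. The identity string appears verbatim in the tcp.c/posix.c errors; a trivial illustration, taking the fixed "NVMe0R01" prefix as-is from those messages:

hostnqn=nqn.2016-06.io.spdk:host1
subnqn=nqn.2016-06.io.spdk:cnode2
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# matches: "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2"
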
00:31:53.015 [2024-12-05 11:15:17.637349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83558 ] 00:31:53.274 [2024-12-05 11:15:17.779602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.274 [2024-12-05 11:15:17.834846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.533 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:53.533 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:53.533 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Br65ye2EuR 00:31:53.533 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:53.792 [2024-12-05 11:15:18.426859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:53.792 [2024-12-05 11:15:18.431710] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:53.793 [2024-12-05 11:15:18.431749] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:31:53.793 [2024-12-05 11:15:18.431796] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:53.793 [2024-12-05 11:15:18.432445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6b00 (107): Transport endpoint is not connected 00:31:53.793 [2024-12-05 11:15:18.433433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d6b00 (9): Bad file descriptor 00:31:53.793 [2024-12-05 11:15:18.434431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:31:53.793 [2024-12-05 11:15:18.434450] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:31:53.793 [2024-12-05 11:15:18.434461] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:31:53.793 [2024-12-05 11:15:18.434477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:31:53.793 2024/12/05 11:15:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:31:53.793 request: 00:31:53.793 { 00:31:53.793 "method": "bdev_nvme_attach_controller", 00:31:53.793 "params": { 00:31:53.793 "name": "TLSTEST", 00:31:53.793 "trtype": "tcp", 00:31:53.793 "traddr": "10.0.0.2", 00:31:53.793 "adrfam": "ipv4", 00:31:53.793 "trsvcid": "4420", 00:31:53.793 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:53.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.793 "prchk_reftag": false, 00:31:53.793 "prchk_guard": false, 00:31:53.793 "hdgst": false, 00:31:53.793 "ddgst": false, 00:31:53.793 "psk": "key0", 00:31:53.793 "allow_unrecognized_csi": false 00:31:53.793 } 00:31:53.793 } 00:31:53.793 Got JSON-RPC error response 00:31:53.793 GoRPCClient: error on JSON-RPC call 00:31:54.051 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83558 00:31:54.051 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83558 ']' 00:31:54.051 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83558 00:31:54.051 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:31:54.051 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:54.051 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83558 00:31:54.051 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:54.052 killing process with pid 83558 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83558' 00:31:54.052 Received shutdown signal, test time was about 10.000000 seconds 00:31:54.052 00:31:54.052 Latency(us) 00:31:54.052 [2024-12-05T11:15:18.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.052 [2024-12-05T11:15:18.704Z] =================================================================================================================== 00:31:54.052 [2024-12-05T11:15:18.704Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83558 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83558 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:54.052 11:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83597 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83597 /var/tmp/bdevperf.sock 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83597 ']' 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.052 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:54.310 [2024-12-05 11:15:18.710460] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:31:54.310 [2024-12-05 11:15:18.710553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83597 ] 00:31:54.310 [2024-12-05 11:15:18.855600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.310 [2024-12-05 11:15:18.911691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.246 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.246 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:55.246 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:31:55.246 [2024-12-05 11:15:19.859404] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:31:55.246 [2024-12-05 11:15:19.859444] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:55.246 2024/12/05 11:15:19 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:55.246 request: 00:31:55.246 { 00:31:55.246 "method": "keyring_file_add_key", 00:31:55.246 "params": { 00:31:55.246 "name": "key0", 00:31:55.246 "path": "" 00:31:55.246 } 00:31:55.246 } 00:31:55.246 Got JSON-RPC error response 00:31:55.246 GoRPCClient: error on JSON-RPC call 00:31:55.246 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:55.505 [2024-12-05 11:15:20.087561] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:55.505 [2024-12-05 11:15:20.087626] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:31:55.505 2024/12/05 11:15:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:31:55.505 request: 00:31:55.505 { 00:31:55.505 "method": "bdev_nvme_attach_controller", 00:31:55.505 "params": { 00:31:55.505 "name": "TLSTEST", 00:31:55.505 "trtype": "tcp", 00:31:55.505 "traddr": "10.0.0.2", 00:31:55.505 "adrfam": "ipv4", 00:31:55.505 "trsvcid": "4420", 00:31:55.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:55.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:55.505 "prchk_reftag": false, 00:31:55.505 "prchk_guard": false, 00:31:55.505 "hdgst": false, 00:31:55.505 "ddgst": false, 00:31:55.505 "psk": "key0", 00:31:55.505 "allow_unrecognized_csi": false 00:31:55.505 } 00:31:55.505 } 00:31:55.505 Got JSON-RPC error response 00:31:55.505 GoRPCClient: error on JSON-RPC call 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83597 00:31:55.505 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83597 ']' 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83597 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83597 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:55.505 killing process with pid 83597 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83597' 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83597 00:31:55.505 Received shutdown signal, test time was about 10.000000 seconds 00:31:55.505 00:31:55.505 Latency(us) 00:31:55.505 [2024-12-05T11:15:20.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.505 [2024-12-05T11:15:20.157Z] =================================================================================================================== 00:31:55.505 [2024-12-05T11:15:20.157Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:55.505 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83597 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82949 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82949 ']' 00:31:55.764 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82949 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82949 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:56.024 killing process with pid 82949 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82949' 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82949 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82949 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:31:56.024 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.z78ooGyyDB 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.z78ooGyyDB 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=83661 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 83661 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83661 ']' 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.284 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:56.284 [2024-12-05 11:15:20.777124] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
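The format_interchange_psk call above wraps the raw configured key into the NVMe TLS PSK interchange format: the prefix "NVMeTLSkey-1", a hash identifier (the argument 2, printed as "02"), and a base64 blob of the key bytes with a 4-byte CRC-32 appended. A hedged reconstruction of what the inline "python -" computes, assuming the trailing tag is the little-endian zlib CRC-32 of the key bytes (this should reproduce the key_long value above):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order for the CRC tag
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF
# expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
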
00:31:56.284 [2024-12-05 11:15:20.777217] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.284 [2024-12-05 11:15:20.933708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.543 [2024-12-05 11:15:20.994857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.543 [2024-12-05 11:15:20.994916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.543 [2024-12-05 11:15:20.994932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.543 [2024-12-05 11:15:20.994945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.543 [2024-12-05 11:15:20.994956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.543 [2024-12-05 11:15:20.995327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.110 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.110 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:57.110 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:57.110 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:57.110 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:57.369 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.369 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.z78ooGyyDB 00:31:57.369 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z78ooGyyDB 00:31:57.369 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:57.627 [2024-12-05 11:15:22.102117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.627 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:57.884 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:58.243 [2024-12-05 11:15:22.658328] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:58.243 [2024-12-05 11:15:22.658559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.243 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:58.526 malloc0 00:31:58.526 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:58.526 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:31:58.783 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z78ooGyyDB 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.z78ooGyyDB 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83776 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83776 /var/tmp/bdevperf.sock 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83776 ']' 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:59.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.040 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:59.041 [2024-12-05 11:15:23.688724] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
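At this point the TLS target side is fully configured. Condensed from the setup_nvmf_tgt steps above (all issued to the nvmf_tgt's default RPC socket), the sequence was:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB        # key file at mode 0600
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The bdevperf instance starting below is the first initiator in this log whose attach succeeds (TLSTESTn1), since key, hostnqn, and subnqn all line up.
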
00:31:59.041 [2024-12-05 11:15:23.688804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83776 ] 00:31:59.297 [2024-12-05 11:15:23.829738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.297 [2024-12-05 11:15:23.909680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.554 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.554 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:31:59.554 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:31:59.811 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:00.068 [2024-12-05 11:15:24.489317] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:00.068 TLSTESTn1 00:32:00.068 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:32:00.068 Running I/O for 10 seconds... 00:32:02.369 4928.00 IOPS, 19.25 MiB/s [2024-12-05T11:15:27.969Z] 4915.50 IOPS, 19.20 MiB/s [2024-12-05T11:15:28.923Z] 5085.33 IOPS, 19.86 MiB/s [2024-12-05T11:15:29.858Z] 5182.75 IOPS, 20.25 MiB/s [2024-12-05T11:15:30.794Z] 5266.40 IOPS, 20.57 MiB/s [2024-12-05T11:15:31.732Z] 5298.33 IOPS, 20.70 MiB/s [2024-12-05T11:15:33.110Z] 5347.86 IOPS, 20.89 MiB/s [2024-12-05T11:15:33.678Z] 5381.25 IOPS, 21.02 MiB/s [2024-12-05T11:15:35.055Z] 5375.11 IOPS, 21.00 MiB/s [2024-12-05T11:15:35.055Z] 5391.90 IOPS, 21.06 MiB/s 00:32:10.403 Latency(us) 00:32:10.403 [2024-12-05T11:15:35.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.403 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:10.403 Verification LBA range: start 0x0 length 0x2000 00:32:10.403 TLSTESTn1 : 10.01 5398.07 21.09 0.00 0.00 23675.76 4056.99 33204.91 00:32:10.403 [2024-12-05T11:15:35.055Z] =================================================================================================================== 00:32:10.403 [2024-12-05T11:15:35.055Z] Total : 5398.07 21.09 0.00 0.00 23675.76 4056.99 33204.91 00:32:10.403 { 00:32:10.403 "results": [ 00:32:10.403 { 00:32:10.403 "job": "TLSTESTn1", 00:32:10.403 "core_mask": "0x4", 00:32:10.403 "workload": "verify", 00:32:10.403 "status": "finished", 00:32:10.403 "verify_range": { 00:32:10.403 "start": 0, 00:32:10.403 "length": 8192 00:32:10.403 }, 00:32:10.403 "queue_depth": 128, 00:32:10.403 "io_size": 4096, 00:32:10.403 "runtime": 10.012282, 00:32:10.403 "iops": 5398.070090315075, 00:32:10.403 "mibps": 21.08621129029326, 00:32:10.403 "io_failed": 0, 00:32:10.403 "io_timeout": 0, 00:32:10.403 "avg_latency_us": 23675.76351912401, 00:32:10.403 "min_latency_us": 4056.9904761904763, 00:32:10.403 "max_latency_us": 33204.90666666667 00:32:10.403 } 00:32:10.403 ], 00:32:10.403 "core_count": 1 00:32:10.403 } 00:32:10.403 11:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83776 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83776 ']' 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83776 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83776 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:32:10.403 killing process with pid 83776 00:32:10.403 Received shutdown signal, test time was about 10.000000 seconds 00:32:10.403 00:32:10.403 Latency(us) 00:32:10.403 [2024-12-05T11:15:35.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.403 [2024-12-05T11:15:35.055Z] =================================================================================================================== 00:32:10.403 [2024-12-05T11:15:35.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83776' 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83776 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83776 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.z78ooGyyDB 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z78ooGyyDB 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z78ooGyyDB 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.z78ooGyyDB 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.z78ooGyyDB 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83923 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:10.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83923 /var/tmp/bdevperf.sock 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83923 ']' 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.403 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:10.403 [2024-12-05 11:15:34.971343] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:10.403 [2024-12-05 11:15:34.971417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83923 ] 00:32:10.670 [2024-12-05 11:15:35.109977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.670 [2024-12-05 11:15:35.155848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.670 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.670 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:10.670 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:32:10.929 [2024-12-05 11:15:35.507267] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z78ooGyyDB': 0100666 00:32:10.929 [2024-12-05 11:15:35.507310] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:10.929 2024/12/05 11:15:35 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.z78ooGyyDB], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:32:10.929 request: 00:32:10.929 { 00:32:10.929 "method": "keyring_file_add_key", 00:32:10.929 "params": { 00:32:10.929 "name": "key0", 00:32:10.929 "path": "/tmp/tmp.z78ooGyyDB" 00:32:10.929 } 00:32:10.929 } 00:32:10.929 Got JSON-RPC error response 00:32:10.929 GoRPCClient: error on JSON-RPC call 00:32:10.929 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:11.187 [2024-12-05 11:15:35.787418] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:11.187 [2024-12-05 11:15:35.787469] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:32:11.187 2024/12/05 11:15:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:32:11.187 request: 00:32:11.187 { 00:32:11.187 "method": "bdev_nvme_attach_controller", 00:32:11.187 "params": { 00:32:11.187 "name": "TLSTEST", 00:32:11.187 "trtype": "tcp", 00:32:11.187 "traddr": "10.0.0.2", 00:32:11.187 "adrfam": "ipv4", 00:32:11.187 "trsvcid": "4420", 00:32:11.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:11.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:11.187 "prchk_reftag": false, 00:32:11.187 "prchk_guard": false, 00:32:11.187 "hdgst": false, 00:32:11.187 "ddgst": false, 00:32:11.187 "psk": "key0", 00:32:11.187 "allow_unrecognized_csi": false 00:32:11.187 } 00:32:11.187 } 00:32:11.187 Got JSON-RPC error response 00:32:11.187 GoRPCClient: error on JSON-RPC call 00:32:11.187 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83923 00:32:11.187 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83923 ']' 00:32:11.187 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83923 00:32:11.187 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:11.187 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.187 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83923 00:32:11.446 killing process with pid 83923 00:32:11.446 Received shutdown signal, test time was about 10.000000 seconds 00:32:11.446 00:32:11.446 Latency(us) 00:32:11.446 [2024-12-05T11:15:36.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.446 [2024-12-05T11:15:36.098Z] =================================================================================================================== 00:32:11.446 [2024-12-05T11:15:36.098Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:11.446 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:32:11.446 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:32:11.446 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83923' 00:32:11.446 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83923 00:32:11.446 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83923 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
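The rejection above is purely a file-mode check: keyring.c refuses key files readable by group or other, so the same key that loaded cleanly at 0600 is refused after the chmod 0666, and the subsequent attach fails with "Could not load PSK: key0". Reduced to its essentials:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
chmod 0600 /tmp/tmp.z78ooGyyDB
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB  # accepted
chmod 0666 /tmp/tmp.z78ooGyyDB
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB  # rejected: permissions '0100666'
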
00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83661 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83661 ']' 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83661 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83661 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:11.446 killing process with pid 83661 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83661' 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83661 00:32:11.446 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83661 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=83967 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 83967 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83967 ']' 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:11.704 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:11.704 [2024-12-05 11:15:36.276411] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
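The target/tls.sh@178 case below exercises the same permission gate on the target path: with the key file still at 0666, keyring_file_add_key is rejected, the name "key0" never enters the keyring, and nvmf_subsystem_add_host then fails one step later. Sketch of the observed pair of failures:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB
# -> Code=-1 Msg=Operation not permitted (file mode 0100666)
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# -> Code=-32603 Msg=Internal error ("Key 'key0' does not exist")
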
00:32:11.704 [2024-12-05 11:15:36.276502] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.963 [2024-12-05 11:15:36.415097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.963 [2024-12-05 11:15:36.467476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.963 [2024-12-05 11:15:36.467523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.963 [2024-12-05 11:15:36.467533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.963 [2024-12-05 11:15:36.467541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.963 [2024-12-05 11:15:36.467548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.963 [2024-12-05 11:15:36.467839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.z78ooGyyDB 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.z78ooGyyDB 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.z78ooGyyDB 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z78ooGyyDB 00:32:12.899 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:13.159 [2024-12-05 11:15:37.577420] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.159 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:13.417 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:13.675 [2024-12-05 11:15:38.105487] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:13.675 [2024-12-05 11:15:38.105713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.675 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:13.933 malloc0 00:32:13.933 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:14.191 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:32:14.449 [2024-12-05 11:15:38.902469] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z78ooGyyDB': 0100666 00:32:14.449 [2024-12-05 11:15:38.902515] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:14.449 2024/12/05 11:15:38 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.z78ooGyyDB], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:32:14.449 request: 00:32:14.449 { 00:32:14.449 "method": "keyring_file_add_key", 00:32:14.449 "params": { 00:32:14.449 "name": "key0", 00:32:14.449 "path": "/tmp/tmp.z78ooGyyDB" 00:32:14.449 } 00:32:14.449 } 00:32:14.449 Got JSON-RPC error response 00:32:14.449 GoRPCClient: error on JSON-RPC call 00:32:14.449 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:32:14.709 [2024-12-05 11:15:39.146548] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:32:14.709 [2024-12-05 11:15:39.146624] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:32:14.709 2024/12/05 11:15:39 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:32:14.709 request: 00:32:14.709 { 00:32:14.709 "method": "nvmf_subsystem_add_host", 00:32:14.709 "params": { 00:32:14.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:14.709 "host": "nqn.2016-06.io.spdk:host1", 00:32:14.709 "psk": "key0" 00:32:14.709 } 00:32:14.709 } 00:32:14.709 Got JSON-RPC error response 00:32:14.709 GoRPCClient: error on JSON-RPC call 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83967 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83967 ']' 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 83967 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83967 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83967' 00:32:14.709 killing process with pid 83967 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83967 00:32:14.709 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83967 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.z78ooGyyDB 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84097 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84097 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84097 ']' 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.968 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:14.968 [2024-12-05 11:15:39.482713] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:14.968 [2024-12-05 11:15:39.482813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.226 [2024-12-05 11:15:39.630641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.226 [2024-12-05 11:15:39.682673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:15.226 [2024-12-05 11:15:39.682727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.226 [2024-12-05 11:15:39.682737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.226 [2024-12-05 11:15:39.682745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.226 [2024-12-05 11:15:39.682752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.226 [2024-12-05 11:15:39.683060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.z78ooGyyDB 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z78ooGyyDB 00:32:15.226 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:15.522 [2024-12-05 11:15:40.095376] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.522 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:15.783 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:16.041 [2024-12-05 11:15:40.659499] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:16.041 [2024-12-05 11:15:40.659746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.041 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:16.608 malloc0 00:32:16.608 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:16.865 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:32:17.123 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84195 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:17.382 11:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84195 /var/tmp/bdevperf.sock 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84195 ']' 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.382 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:17.382 [2024-12-05 11:15:41.860705] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:17.382 [2024-12-05 11:15:41.860818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84195 ] 00:32:17.382 [2024-12-05 11:15:42.021656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.640 [2024-12-05 11:15:42.086456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.640 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.640 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:17.640 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:32:17.898 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:18.156 [2024-12-05 11:15:42.712995] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:18.156 TLSTESTn1 00:32:18.156 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:18.725 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:32:18.725 "subsystems": [ 00:32:18.725 { 00:32:18.725 "subsystem": "keyring", 00:32:18.725 "config": [ 00:32:18.725 { 00:32:18.725 "method": "keyring_file_add_key", 00:32:18.725 "params": { 00:32:18.725 "name": "key0", 00:32:18.725 "path": "/tmp/tmp.z78ooGyyDB" 00:32:18.725 } 00:32:18.725 } 00:32:18.725 ] 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "subsystem": "iobuf", 00:32:18.725 "config": [ 00:32:18.725 { 00:32:18.725 "method": "iobuf_set_options", 00:32:18.725 "params": { 00:32:18.725 "enable_numa": false, 00:32:18.725 "large_bufsize": 135168, 00:32:18.725 "large_pool_count": 1024, 00:32:18.725 
"small_bufsize": 8192, 00:32:18.725 "small_pool_count": 8192 00:32:18.725 } 00:32:18.725 } 00:32:18.725 ] 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "subsystem": "sock", 00:32:18.725 "config": [ 00:32:18.725 { 00:32:18.725 "method": "sock_set_default_impl", 00:32:18.725 "params": { 00:32:18.725 "impl_name": "posix" 00:32:18.725 } 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "method": "sock_impl_set_options", 00:32:18.725 "params": { 00:32:18.725 "enable_ktls": false, 00:32:18.725 "enable_placement_id": 0, 00:32:18.725 "enable_quickack": false, 00:32:18.725 "enable_recv_pipe": true, 00:32:18.725 "enable_zerocopy_send_client": false, 00:32:18.725 "enable_zerocopy_send_server": true, 00:32:18.725 "impl_name": "ssl", 00:32:18.725 "recv_buf_size": 4096, 00:32:18.725 "send_buf_size": 4096, 00:32:18.725 "tls_version": 0, 00:32:18.725 "zerocopy_threshold": 0 00:32:18.725 } 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "method": "sock_impl_set_options", 00:32:18.725 "params": { 00:32:18.725 "enable_ktls": false, 00:32:18.725 "enable_placement_id": 0, 00:32:18.725 "enable_quickack": false, 00:32:18.725 "enable_recv_pipe": true, 00:32:18.725 "enable_zerocopy_send_client": false, 00:32:18.725 "enable_zerocopy_send_server": true, 00:32:18.725 "impl_name": "posix", 00:32:18.725 "recv_buf_size": 2097152, 00:32:18.725 "send_buf_size": 2097152, 00:32:18.725 "tls_version": 0, 00:32:18.725 "zerocopy_threshold": 0 00:32:18.725 } 00:32:18.725 } 00:32:18.725 ] 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "subsystem": "vmd", 00:32:18.725 "config": [] 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "subsystem": "accel", 00:32:18.725 "config": [ 00:32:18.725 { 00:32:18.725 "method": "accel_set_options", 00:32:18.725 "params": { 00:32:18.725 "buf_count": 2048, 00:32:18.725 "large_cache_size": 16, 00:32:18.725 "sequence_count": 2048, 00:32:18.725 "small_cache_size": 128, 00:32:18.725 "task_count": 2048 00:32:18.725 } 00:32:18.725 } 00:32:18.725 ] 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "subsystem": "bdev", 00:32:18.725 "config": [ 00:32:18.725 { 00:32:18.725 "method": "bdev_set_options", 00:32:18.725 "params": { 00:32:18.725 "bdev_auto_examine": true, 00:32:18.725 "bdev_io_cache_size": 256, 00:32:18.725 "bdev_io_pool_size": 65535, 00:32:18.725 "iobuf_large_cache_size": 16, 00:32:18.725 "iobuf_small_cache_size": 128 00:32:18.725 } 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "method": "bdev_raid_set_options", 00:32:18.725 "params": { 00:32:18.725 "process_max_bandwidth_mb_sec": 0, 00:32:18.725 "process_window_size_kb": 1024 00:32:18.725 } 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "method": "bdev_iscsi_set_options", 00:32:18.725 "params": { 00:32:18.725 "timeout_sec": 30 00:32:18.725 } 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "method": "bdev_nvme_set_options", 00:32:18.725 "params": { 00:32:18.725 "action_on_timeout": "none", 00:32:18.725 "allow_accel_sequence": false, 00:32:18.725 "arbitration_burst": 0, 00:32:18.725 "bdev_retry_count": 3, 00:32:18.725 "ctrlr_loss_timeout_sec": 0, 00:32:18.725 "delay_cmd_submit": true, 00:32:18.725 "dhchap_dhgroups": [ 00:32:18.725 "null", 00:32:18.725 "ffdhe2048", 00:32:18.725 "ffdhe3072", 00:32:18.725 "ffdhe4096", 00:32:18.725 "ffdhe6144", 00:32:18.725 "ffdhe8192" 00:32:18.725 ], 00:32:18.725 "dhchap_digests": [ 00:32:18.725 "sha256", 00:32:18.725 "sha384", 00:32:18.725 "sha512" 00:32:18.725 ], 00:32:18.725 "disable_auto_failback": false, 00:32:18.725 "fast_io_fail_timeout_sec": 0, 00:32:18.725 "generate_uuids": false, 00:32:18.725 "high_priority_weight": 0, 00:32:18.725 
"io_path_stat": false, 00:32:18.725 "io_queue_requests": 0, 00:32:18.725 "keep_alive_timeout_ms": 10000, 00:32:18.725 "low_priority_weight": 0, 00:32:18.725 "medium_priority_weight": 0, 00:32:18.725 "nvme_adminq_poll_period_us": 10000, 00:32:18.725 "nvme_error_stat": false, 00:32:18.725 "nvme_ioq_poll_period_us": 0, 00:32:18.725 "rdma_cm_event_timeout_ms": 0, 00:32:18.725 "rdma_max_cq_size": 0, 00:32:18.725 "rdma_srq_size": 0, 00:32:18.725 "reconnect_delay_sec": 0, 00:32:18.725 "timeout_admin_us": 0, 00:32:18.725 "timeout_us": 0, 00:32:18.725 "transport_ack_timeout": 0, 00:32:18.725 "transport_retry_count": 4, 00:32:18.725 "transport_tos": 0 00:32:18.725 } 00:32:18.725 }, 00:32:18.725 { 00:32:18.725 "method": "bdev_nvme_set_hotplug", 00:32:18.725 "params": { 00:32:18.725 "enable": false, 00:32:18.725 "period_us": 100000 00:32:18.725 } 00:32:18.725 }, 00:32:18.725 { 00:32:18.726 "method": "bdev_malloc_create", 00:32:18.726 "params": { 00:32:18.726 "block_size": 4096, 00:32:18.726 "dif_is_head_of_md": false, 00:32:18.726 "dif_pi_format": 0, 00:32:18.726 "dif_type": 0, 00:32:18.726 "md_size": 0, 00:32:18.726 "name": "malloc0", 00:32:18.726 "num_blocks": 8192, 00:32:18.726 "optimal_io_boundary": 0, 00:32:18.726 "physical_block_size": 4096, 00:32:18.726 "uuid": "7775c24e-8e5e-4f43-9956-b9208ae90724" 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": "bdev_wait_for_examine" 00:32:18.726 } 00:32:18.726 ] 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "subsystem": "nbd", 00:32:18.726 "config": [] 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "subsystem": "scheduler", 00:32:18.726 "config": [ 00:32:18.726 { 00:32:18.726 "method": "framework_set_scheduler", 00:32:18.726 "params": { 00:32:18.726 "name": "static" 00:32:18.726 } 00:32:18.726 } 00:32:18.726 ] 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "subsystem": "nvmf", 00:32:18.726 "config": [ 00:32:18.726 { 00:32:18.726 "method": "nvmf_set_config", 00:32:18.726 "params": { 00:32:18.726 "admin_cmd_passthru": { 00:32:18.726 "identify_ctrlr": false 00:32:18.726 }, 00:32:18.726 "dhchap_dhgroups": [ 00:32:18.726 "null", 00:32:18.726 "ffdhe2048", 00:32:18.726 "ffdhe3072", 00:32:18.726 "ffdhe4096", 00:32:18.726 "ffdhe6144", 00:32:18.726 "ffdhe8192" 00:32:18.726 ], 00:32:18.726 "dhchap_digests": [ 00:32:18.726 "sha256", 00:32:18.726 "sha384", 00:32:18.726 "sha512" 00:32:18.726 ], 00:32:18.726 "discovery_filter": "match_any" 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": "nvmf_set_max_subsystems", 00:32:18.726 "params": { 00:32:18.726 "max_subsystems": 1024 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": "nvmf_set_crdt", 00:32:18.726 "params": { 00:32:18.726 "crdt1": 0, 00:32:18.726 "crdt2": 0, 00:32:18.726 "crdt3": 0 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": "nvmf_create_transport", 00:32:18.726 "params": { 00:32:18.726 "abort_timeout_sec": 1, 00:32:18.726 "ack_timeout": 0, 00:32:18.726 "buf_cache_size": 4294967295, 00:32:18.726 "c2h_success": false, 00:32:18.726 "data_wr_pool_size": 0, 00:32:18.726 "dif_insert_or_strip": false, 00:32:18.726 "in_capsule_data_size": 4096, 00:32:18.726 "io_unit_size": 131072, 00:32:18.726 "max_aq_depth": 128, 00:32:18.726 "max_io_qpairs_per_ctrlr": 127, 00:32:18.726 "max_io_size": 131072, 00:32:18.726 "max_queue_depth": 128, 00:32:18.726 "num_shared_buffers": 511, 00:32:18.726 "sock_priority": 0, 00:32:18.726 "trtype": "TCP", 00:32:18.726 "zcopy": false 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": 
"nvmf_create_subsystem", 00:32:18.726 "params": { 00:32:18.726 "allow_any_host": false, 00:32:18.726 "ana_reporting": false, 00:32:18.726 "max_cntlid": 65519, 00:32:18.726 "max_namespaces": 10, 00:32:18.726 "min_cntlid": 1, 00:32:18.726 "model_number": "SPDK bdev Controller", 00:32:18.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.726 "serial_number": "SPDK00000000000001" 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": "nvmf_subsystem_add_host", 00:32:18.726 "params": { 00:32:18.726 "host": "nqn.2016-06.io.spdk:host1", 00:32:18.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.726 "psk": "key0" 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": "nvmf_subsystem_add_ns", 00:32:18.726 "params": { 00:32:18.726 "namespace": { 00:32:18.726 "bdev_name": "malloc0", 00:32:18.726 "nguid": "7775C24E8E5E4F439956B9208AE90724", 00:32:18.726 "no_auto_visible": false, 00:32:18.726 "nsid": 1, 00:32:18.726 "uuid": "7775c24e-8e5e-4f43-9956-b9208ae90724" 00:32:18.726 }, 00:32:18.726 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:18.726 } 00:32:18.726 }, 00:32:18.726 { 00:32:18.726 "method": "nvmf_subsystem_add_listener", 00:32:18.726 "params": { 00:32:18.726 "listen_address": { 00:32:18.726 "adrfam": "IPv4", 00:32:18.726 "traddr": "10.0.0.2", 00:32:18.726 "trsvcid": "4420", 00:32:18.726 "trtype": "TCP" 00:32:18.726 }, 00:32:18.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.726 "secure_channel": true 00:32:18.726 } 00:32:18.726 } 00:32:18.726 ] 00:32:18.726 } 00:32:18.726 ] 00:32:18.726 }' 00:32:18.726 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:32:18.985 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:32:18.985 "subsystems": [ 00:32:18.985 { 00:32:18.985 "subsystem": "keyring", 00:32:18.985 "config": [ 00:32:18.985 { 00:32:18.985 "method": "keyring_file_add_key", 00:32:18.985 "params": { 00:32:18.985 "name": "key0", 00:32:18.985 "path": "/tmp/tmp.z78ooGyyDB" 00:32:18.985 } 00:32:18.985 } 00:32:18.985 ] 00:32:18.985 }, 00:32:18.985 { 00:32:18.985 "subsystem": "iobuf", 00:32:18.985 "config": [ 00:32:18.985 { 00:32:18.985 "method": "iobuf_set_options", 00:32:18.985 "params": { 00:32:18.985 "enable_numa": false, 00:32:18.985 "large_bufsize": 135168, 00:32:18.985 "large_pool_count": 1024, 00:32:18.985 "small_bufsize": 8192, 00:32:18.985 "small_pool_count": 8192 00:32:18.985 } 00:32:18.985 } 00:32:18.985 ] 00:32:18.985 }, 00:32:18.985 { 00:32:18.985 "subsystem": "sock", 00:32:18.985 "config": [ 00:32:18.985 { 00:32:18.985 "method": "sock_set_default_impl", 00:32:18.985 "params": { 00:32:18.985 "impl_name": "posix" 00:32:18.985 } 00:32:18.985 }, 00:32:18.985 { 00:32:18.985 "method": "sock_impl_set_options", 00:32:18.985 "params": { 00:32:18.985 "enable_ktls": false, 00:32:18.985 "enable_placement_id": 0, 00:32:18.985 "enable_quickack": false, 00:32:18.985 "enable_recv_pipe": true, 00:32:18.985 "enable_zerocopy_send_client": false, 00:32:18.985 "enable_zerocopy_send_server": true, 00:32:18.985 "impl_name": "ssl", 00:32:18.985 "recv_buf_size": 4096, 00:32:18.985 "send_buf_size": 4096, 00:32:18.985 "tls_version": 0, 00:32:18.985 "zerocopy_threshold": 0 00:32:18.985 } 00:32:18.985 }, 00:32:18.985 { 00:32:18.985 "method": "sock_impl_set_options", 00:32:18.985 "params": { 00:32:18.985 "enable_ktls": false, 00:32:18.985 "enable_placement_id": 0, 00:32:18.985 "enable_quickack": false, 00:32:18.985 "enable_recv_pipe": true, 
00:32:18.985 "enable_zerocopy_send_client": false, 00:32:18.985 "enable_zerocopy_send_server": true, 00:32:18.985 "impl_name": "posix", 00:32:18.985 "recv_buf_size": 2097152, 00:32:18.985 "send_buf_size": 2097152, 00:32:18.985 "tls_version": 0, 00:32:18.985 "zerocopy_threshold": 0 00:32:18.985 } 00:32:18.985 } 00:32:18.985 ] 00:32:18.985 }, 00:32:18.985 { 00:32:18.985 "subsystem": "vmd", 00:32:18.985 "config": [] 00:32:18.985 }, 00:32:18.985 { 00:32:18.985 "subsystem": "accel", 00:32:18.985 "config": [ 00:32:18.985 { 00:32:18.985 "method": "accel_set_options", 00:32:18.985 "params": { 00:32:18.985 "buf_count": 2048, 00:32:18.985 "large_cache_size": 16, 00:32:18.985 "sequence_count": 2048, 00:32:18.985 "small_cache_size": 128, 00:32:18.985 "task_count": 2048 00:32:18.985 } 00:32:18.985 } 00:32:18.985 ] 00:32:18.985 }, 00:32:18.985 { 00:32:18.985 "subsystem": "bdev", 00:32:18.985 "config": [ 00:32:18.986 { 00:32:18.986 "method": "bdev_set_options", 00:32:18.986 "params": { 00:32:18.986 "bdev_auto_examine": true, 00:32:18.986 "bdev_io_cache_size": 256, 00:32:18.986 "bdev_io_pool_size": 65535, 00:32:18.986 "iobuf_large_cache_size": 16, 00:32:18.986 "iobuf_small_cache_size": 128 00:32:18.986 } 00:32:18.986 }, 00:32:18.986 { 00:32:18.986 "method": "bdev_raid_set_options", 00:32:18.986 "params": { 00:32:18.986 "process_max_bandwidth_mb_sec": 0, 00:32:18.986 "process_window_size_kb": 1024 00:32:18.986 } 00:32:18.986 }, 00:32:18.986 { 00:32:18.986 "method": "bdev_iscsi_set_options", 00:32:18.986 "params": { 00:32:18.986 "timeout_sec": 30 00:32:18.986 } 00:32:18.986 }, 00:32:18.986 { 00:32:18.986 "method": "bdev_nvme_set_options", 00:32:18.986 "params": { 00:32:18.986 "action_on_timeout": "none", 00:32:18.986 "allow_accel_sequence": false, 00:32:18.986 "arbitration_burst": 0, 00:32:18.986 "bdev_retry_count": 3, 00:32:18.986 "ctrlr_loss_timeout_sec": 0, 00:32:18.986 "delay_cmd_submit": true, 00:32:18.986 "dhchap_dhgroups": [ 00:32:18.986 "null", 00:32:18.986 "ffdhe2048", 00:32:18.986 "ffdhe3072", 00:32:18.986 "ffdhe4096", 00:32:18.986 "ffdhe6144", 00:32:18.986 "ffdhe8192" 00:32:18.986 ], 00:32:18.986 "dhchap_digests": [ 00:32:18.986 "sha256", 00:32:18.986 "sha384", 00:32:18.986 "sha512" 00:32:18.986 ], 00:32:18.986 "disable_auto_failback": false, 00:32:18.986 "fast_io_fail_timeout_sec": 0, 00:32:18.986 "generate_uuids": false, 00:32:18.986 "high_priority_weight": 0, 00:32:18.986 "io_path_stat": false, 00:32:18.986 "io_queue_requests": 512, 00:32:18.986 "keep_alive_timeout_ms": 10000, 00:32:18.986 "low_priority_weight": 0, 00:32:18.986 "medium_priority_weight": 0, 00:32:18.986 "nvme_adminq_poll_period_us": 10000, 00:32:18.986 "nvme_error_stat": false, 00:32:18.986 "nvme_ioq_poll_period_us": 0, 00:32:18.986 "rdma_cm_event_timeout_ms": 0, 00:32:18.986 "rdma_max_cq_size": 0, 00:32:18.986 "rdma_srq_size": 0, 00:32:18.986 "reconnect_delay_sec": 0, 00:32:18.986 "timeout_admin_us": 0, 00:32:18.986 "timeout_us": 0, 00:32:18.986 "transport_ack_timeout": 0, 00:32:18.986 "transport_retry_count": 4, 00:32:18.986 "transport_tos": 0 00:32:18.986 } 00:32:18.986 }, 00:32:18.986 { 00:32:18.986 "method": "bdev_nvme_attach_controller", 00:32:18.986 "params": { 00:32:18.986 "adrfam": "IPv4", 00:32:18.986 "ctrlr_loss_timeout_sec": 0, 00:32:18.986 "ddgst": false, 00:32:18.986 "fast_io_fail_timeout_sec": 0, 00:32:18.986 "hdgst": false, 00:32:18.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:18.986 "multipath": "multipath", 00:32:18.986 "name": "TLSTEST", 00:32:18.986 "prchk_guard": false, 00:32:18.986 "prchk_reftag": 
false, 00:32:18.986 "psk": "key0", 00:32:18.986 "reconnect_delay_sec": 0, 00:32:18.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.986 "traddr": "10.0.0.2", 00:32:18.986 "trsvcid": "4420", 00:32:18.986 "trtype": "TCP" 00:32:18.986 } 00:32:18.986 }, 00:32:18.986 { 00:32:18.986 "method": "bdev_nvme_set_hotplug", 00:32:18.986 "params": { 00:32:18.986 "enable": false, 00:32:18.986 "period_us": 100000 00:32:18.986 } 00:32:18.986 }, 00:32:18.986 { 00:32:18.986 "method": "bdev_wait_for_examine" 00:32:18.986 } 00:32:18.986 ] 00:32:18.986 }, 00:32:18.986 { 00:32:18.986 "subsystem": "nbd", 00:32:18.986 "config": [] 00:32:18.986 } 00:32:18.986 ] 00:32:18.986 }' 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84195 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84195 ']' 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84195 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84195 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:32:18.986 killing process with pid 84195 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84195' 00:32:18.986 Received shutdown signal, test time was about 10.000000 seconds 00:32:18.986 00:32:18.986 Latency(us) 00:32:18.986 [2024-12-05T11:15:43.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.986 [2024-12-05T11:15:43.638Z] =================================================================================================================== 00:32:18.986 [2024-12-05T11:15:43.638Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84195 00:32:18.986 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84195 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84097 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84097 ']' 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84097 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84097 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:19.246 killing process with pid 84097 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84097' 00:32:19.246 
11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84097 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84097 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:32:19.246 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:32:19.246 "subsystems": [ 00:32:19.246 { 00:32:19.246 "subsystem": "keyring", 00:32:19.246 "config": [ 00:32:19.246 { 00:32:19.246 "method": "keyring_file_add_key", 00:32:19.246 "params": { 00:32:19.246 "name": "key0", 00:32:19.246 "path": "/tmp/tmp.z78ooGyyDB" 00:32:19.246 } 00:32:19.246 } 00:32:19.246 ] 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "subsystem": "iobuf", 00:32:19.246 "config": [ 00:32:19.246 { 00:32:19.246 "method": "iobuf_set_options", 00:32:19.246 "params": { 00:32:19.246 "enable_numa": false, 00:32:19.246 "large_bufsize": 135168, 00:32:19.246 "large_pool_count": 1024, 00:32:19.246 "small_bufsize": 8192, 00:32:19.246 "small_pool_count": 8192 00:32:19.246 } 00:32:19.246 } 00:32:19.246 ] 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "subsystem": "sock", 00:32:19.246 "config": [ 00:32:19.246 { 00:32:19.246 "method": "sock_set_default_impl", 00:32:19.246 "params": { 00:32:19.246 "impl_name": "posix" 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "sock_impl_set_options", 00:32:19.246 "params": { 00:32:19.246 "enable_ktls": false, 00:32:19.246 "enable_placement_id": 0, 00:32:19.246 "enable_quickack": false, 00:32:19.246 "enable_recv_pipe": true, 00:32:19.246 "enable_zerocopy_send_client": false, 00:32:19.246 "enable_zerocopy_send_server": true, 00:32:19.246 "impl_name": "ssl", 00:32:19.246 "recv_buf_size": 4096, 00:32:19.246 "send_buf_size": 4096, 00:32:19.246 "tls_version": 0, 00:32:19.246 "zerocopy_threshold": 0 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "sock_impl_set_options", 00:32:19.246 "params": { 00:32:19.246 "enable_ktls": false, 00:32:19.246 "enable_placement_id": 0, 00:32:19.246 "enable_quickack": false, 00:32:19.246 "enable_recv_pipe": true, 00:32:19.246 "enable_zerocopy_send_client": false, 00:32:19.246 "enable_zerocopy_send_server": true, 00:32:19.246 "impl_name": "posix", 00:32:19.246 "recv_buf_size": 2097152, 00:32:19.246 "send_buf_size": 2097152, 00:32:19.246 "tls_version": 0, 00:32:19.246 "zerocopy_threshold": 0 00:32:19.246 } 00:32:19.246 } 00:32:19.246 ] 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "subsystem": "vmd", 00:32:19.246 "config": [] 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "subsystem": "accel", 00:32:19.246 "config": [ 00:32:19.246 { 00:32:19.246 "method": "accel_set_options", 00:32:19.246 "params": { 00:32:19.246 "buf_count": 2048, 00:32:19.246 "large_cache_size": 16, 00:32:19.246 "sequence_count": 2048, 00:32:19.246 "small_cache_size": 128, 00:32:19.246 "task_count": 2048 00:32:19.246 } 00:32:19.246 } 00:32:19.246 ] 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "subsystem": "bdev", 00:32:19.246 "config": [ 00:32:19.246 { 00:32:19.246 "method": "bdev_set_options", 00:32:19.246 "params": { 00:32:19.246 "bdev_auto_examine": true, 00:32:19.246 "bdev_io_cache_size": 256, 00:32:19.246 "bdev_io_pool_size": 65535, 00:32:19.246 "iobuf_large_cache_size": 16, 00:32:19.246 "iobuf_small_cache_size": 128 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "bdev_raid_set_options", 00:32:19.246 "params": { 00:32:19.246 "process_max_bandwidth_mb_sec": 0, 00:32:19.246 "process_window_size_kb": 
1024 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "bdev_iscsi_set_options", 00:32:19.246 "params": { 00:32:19.246 "timeout_sec": 30 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "bdev_nvme_set_options", 00:32:19.246 "params": { 00:32:19.246 "action_on_timeout": "none", 00:32:19.246 "allow_accel_sequence": false, 00:32:19.246 "arbitration_burst": 0, 00:32:19.246 "bdev_retry_count": 3, 00:32:19.246 "ctrlr_loss_timeout_sec": 0, 00:32:19.246 "delay_cmd_submit": true, 00:32:19.246 "dhchap_dhgroups": [ 00:32:19.246 "null", 00:32:19.246 "ffdhe2048", 00:32:19.246 "ffdhe3072", 00:32:19.246 "ffdhe4096", 00:32:19.246 "ffdhe6144", 00:32:19.246 "ffdhe8192" 00:32:19.246 ], 00:32:19.246 "dhchap_digests": [ 00:32:19.246 "sha256", 00:32:19.246 "sha384", 00:32:19.246 "sha512" 00:32:19.246 ], 00:32:19.246 "disable_auto_failback": false, 00:32:19.246 "fast_io_fail_timeout_sec": 0, 00:32:19.246 "generate_uuids": false, 00:32:19.246 "high_priority_weight": 0, 00:32:19.246 "io_path_stat": false, 00:32:19.246 "io_queue_requests": 0, 00:32:19.246 "keep_alive_timeout_ms": 10000, 00:32:19.246 "low_priority_weight": 0, 00:32:19.246 "medium_priority_weight": 0, 00:32:19.246 "nvme_adminq_poll_period_us": 10000, 00:32:19.246 "nvme_error_stat": false, 00:32:19.246 "nvme_ioq_poll_period_us": 0, 00:32:19.246 "rdma_cm_event_timeout_ms": 0, 00:32:19.246 "rdma_max_cq_size": 0, 00:32:19.246 "rdma_srq_size": 0, 00:32:19.246 "reconnect_delay_sec": 0, 00:32:19.246 "timeout_admin_us": 0, 00:32:19.246 "timeout_us": 0, 00:32:19.246 "transport_ack_timeout": 0, 00:32:19.246 "transport_retry_count": 4, 00:32:19.246 "transport_tos": 0 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "bdev_nvme_set_hotplug", 00:32:19.246 "params": { 00:32:19.246 "enable": false, 00:32:19.246 "period_us": 100000 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "bdev_malloc_create", 00:32:19.246 "params": { 00:32:19.246 "block_size": 4096, 00:32:19.246 "dif_is_head_of_md": false, 00:32:19.246 "dif_pi_format": 0, 00:32:19.246 "dif_type": 0, 00:32:19.246 "md_size": 0, 00:32:19.246 "name": "malloc0", 00:32:19.246 "num_blocks": 8192, 00:32:19.246 "optimal_io_boundary": 0, 00:32:19.246 "physical_block_size": 4096, 00:32:19.246 "uuid": "7775c24e-8e5e-4f43-9956-b9208ae90724" 00:32:19.246 } 00:32:19.246 }, 00:32:19.246 { 00:32:19.246 "method": "bdev_wait_for_examine" 00:32:19.246 } 00:32:19.247 ] 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "subsystem": "nbd", 00:32:19.247 "config": [] 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "subsystem": "scheduler", 00:32:19.247 "config": [ 00:32:19.247 { 00:32:19.247 "method": "framework_set_scheduler", 00:32:19.247 "params": { 00:32:19.247 "name": "static" 00:32:19.247 } 00:32:19.247 } 00:32:19.247 ] 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "subsystem": "nvmf", 00:32:19.247 "config": [ 00:32:19.247 { 00:32:19.247 "method": "nvmf_set_config", 00:32:19.247 "params": { 00:32:19.247 "admin_cmd_passthru": { 00:32:19.247 "identify_ctrlr": false 00:32:19.247 }, 00:32:19.247 "dhchap_dhgroups": [ 00:32:19.247 "null", 00:32:19.247 "ffdhe2048", 00:32:19.247 "ffdhe3072", 00:32:19.247 "ffdhe4096", 00:32:19.247 "ffdhe6144", 00:32:19.247 "ffdhe8192" 00:32:19.247 ], 00:32:19.247 "dhchap_digests": [ 00:32:19.247 "sha256", 00:32:19.247 "sha384", 00:32:19.247 "sha512" 00:32:19.247 ], 00:32:19.247 "discovery_filter": "match_any" 00:32:19.247 } 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "method": "nvmf_set_max_subsystems", 00:32:19.247 "params": { 
00:32:19.247 "max_subsystems": 1024 00:32:19.247 } 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "method": "nvmf_set_crdt", 00:32:19.247 "params": { 00:32:19.247 "crdt1": 0, 00:32:19.247 "crdt2": 0, 00:32:19.247 "crdt3": 0 00:32:19.247 } 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "method": "nvmf_create_transport", 00:32:19.247 "params": { 00:32:19.247 "abort_timeout_sec": 1, 00:32:19.247 "ack_timeout": 0, 00:32:19.247 "buf_cache_size": 4294967295, 00:32:19.247 "c2h_success": false, 00:32:19.247 "data_wr_pool_size": 0, 00:32:19.247 "dif_insert_or_strip": false, 00:32:19.247 "in_capsule_data_size": 4096, 00:32:19.247 "io_unit_size": 131072, 00:32:19.247 "max_aq_depth": 128, 00:32:19.247 "max_io_qpairs_per_ctrlr": 127, 00:32:19.247 "max_io_size": 131072, 00:32:19.247 "max_queue_depth": 128, 00:32:19.247 "num_shared_buffers": 511, 00:32:19.247 "sock_priority": 0, 00:32:19.247 "trtype": "TCP", 00:32:19.247 "zcopy": false 00:32:19.247 } 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "method": "nvmf_create_subsystem", 00:32:19.247 "params": { 00:32:19.247 "allow_any_host": false, 00:32:19.247 "ana_reporting": false, 00:32:19.247 "max_cntlid": 65519, 00:32:19.247 "max_namespaces": 10, 00:32:19.247 "min_cntlid": 1, 00:32:19.247 "model_number": "SPDK bdev Controller", 00:32:19.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:19.247 "serial_number": "SPDK00000000000001" 00:32:19.247 } 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "method": "nvmf_subsystem_add_host", 00:32:19.247 "params": { 00:32:19.247 "host": "nqn.2016-06.io.spdk:host1", 00:32:19.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:19.247 "psk": "key0" 00:32:19.247 } 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "method": "nvmf_subsystem_add_ns", 00:32:19.247 "params": { 00:32:19.247 "namespace": { 00:32:19.247 "bdev_name": "malloc0", 00:32:19.247 "nguid": "7775C24E8E5E4F439956B9208AE90724", 00:32:19.247 "no_auto_visible": false, 00:32:19.247 "nsid": 1, 00:32:19.247 "uuid": "7775c24e-8e5e-4f43-9956-b9208ae90724" 00:32:19.247 }, 00:32:19.247 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:19.247 } 00:32:19.247 }, 00:32:19.247 { 00:32:19.247 "method": "nvmf_subsystem_add_listener", 00:32:19.247 "params": { 00:32:19.247 "listen_address": { 00:32:19.247 "adrfam": "IPv4", 00:32:19.247 "traddr": "10.0.0.2", 00:32:19.247 "trsvcid": "4420", 00:32:19.247 "trtype": "TCP" 00:32:19.247 }, 00:32:19.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:19.247 "secure_channel": true 00:32:19.247 } 00:32:19.247 } 00:32:19.247 ] 00:32:19.247 } 00:32:19.247 ] 00:32:19.247 }' 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84266 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84266 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84266 ']' 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.247 11:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:19.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:19.247 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:19.506 [2024-12-05 11:15:43.948907] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:19.506 [2024-12-05 11:15:43.949002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.506 [2024-12-05 11:15:44.096478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.506 [2024-12-05 11:15:44.149081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.506 [2024-12-05 11:15:44.149167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.506 [2024-12-05 11:15:44.149178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.506 [2024-12-05 11:15:44.149187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.506 [2024-12-05 11:15:44.149195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.506 [2024-12-05 11:15:44.149537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.765 [2024-12-05 11:15:44.367830] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.765 [2024-12-05 11:15:44.399777] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:19.765 [2024-12-05 11:15:44.399982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
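[annotation] The step that follows launches a second bdevperf against the same TLS listener, this time seeded from the JSON configuration saved earlier (-c /dev/fd/63) instead of issuing the keyring and attach RPCs one by one. A sketch of that pattern, reduced to its shape — binary path and flags are the ones traced in this log; <(echo ...) is bash process substitution standing in for the /dev/fd/63 handle the harness uses:

# Replay a previously saved bdevperf config (keyring + attach included) into a
# fresh instance, then drive the verify workload through the TLSTEST bdev.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bdevperfconf")
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests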
00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84310 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84310 /var/tmp/bdevperf.sock 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84310 ']' 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:20.726 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:32:20.726 "subsystems": [ 00:32:20.726 { 00:32:20.726 "subsystem": "keyring", 00:32:20.726 "config": [ 00:32:20.726 { 00:32:20.726 "method": "keyring_file_add_key", 00:32:20.726 "params": { 00:32:20.726 "name": "key0", 00:32:20.726 "path": "/tmp/tmp.z78ooGyyDB" 00:32:20.726 } 00:32:20.726 } 00:32:20.726 ] 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "subsystem": "iobuf", 00:32:20.726 "config": [ 00:32:20.726 { 00:32:20.726 "method": "iobuf_set_options", 00:32:20.726 "params": { 00:32:20.726 "enable_numa": false, 00:32:20.726 "large_bufsize": 135168, 00:32:20.726 "large_pool_count": 1024, 00:32:20.726 "small_bufsize": 8192, 00:32:20.726 "small_pool_count": 8192 00:32:20.726 } 00:32:20.726 } 00:32:20.726 ] 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "subsystem": "sock", 00:32:20.726 "config": [ 00:32:20.726 { 00:32:20.726 "method": "sock_set_default_impl", 00:32:20.726 "params": { 00:32:20.726 "impl_name": "posix" 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "sock_impl_set_options", 00:32:20.726 "params": { 00:32:20.726 "enable_ktls": false, 00:32:20.726 "enable_placement_id": 0, 00:32:20.726 "enable_quickack": false, 00:32:20.726 "enable_recv_pipe": true, 00:32:20.726 "enable_zerocopy_send_client": false, 00:32:20.726 "enable_zerocopy_send_server": true, 00:32:20.726 "impl_name": "ssl", 00:32:20.726 "recv_buf_size": 4096, 00:32:20.726 "send_buf_size": 4096, 00:32:20.726 "tls_version": 0, 00:32:20.726 "zerocopy_threshold": 0 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "sock_impl_set_options", 00:32:20.726 "params": { 00:32:20.726 "enable_ktls": false, 00:32:20.726 "enable_placement_id": 0, 00:32:20.726 "enable_quickack": false, 00:32:20.726 "enable_recv_pipe": true, 00:32:20.726 "enable_zerocopy_send_client": false, 00:32:20.726 "enable_zerocopy_send_server": true, 00:32:20.726 "impl_name": "posix", 00:32:20.726 "recv_buf_size": 2097152, 00:32:20.726 "send_buf_size": 2097152, 00:32:20.726 "tls_version": 0, 00:32:20.726 "zerocopy_threshold": 0 00:32:20.726 } 00:32:20.726 } 00:32:20.726 ] 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "subsystem": "vmd", 00:32:20.726 "config": [] 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "subsystem": "accel", 00:32:20.726 
"config": [ 00:32:20.726 { 00:32:20.726 "method": "accel_set_options", 00:32:20.726 "params": { 00:32:20.726 "buf_count": 2048, 00:32:20.726 "large_cache_size": 16, 00:32:20.726 "sequence_count": 2048, 00:32:20.726 "small_cache_size": 128, 00:32:20.726 "task_count": 2048 00:32:20.726 } 00:32:20.726 } 00:32:20.726 ] 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "subsystem": "bdev", 00:32:20.726 "config": [ 00:32:20.726 { 00:32:20.726 "method": "bdev_set_options", 00:32:20.726 "params": { 00:32:20.726 "bdev_auto_examine": true, 00:32:20.726 "bdev_io_cache_size": 256, 00:32:20.726 "bdev_io_pool_size": 65535, 00:32:20.726 "iobuf_large_cache_size": 16, 00:32:20.726 "iobuf_small_cache_size": 128 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "bdev_raid_set_options", 00:32:20.726 "params": { 00:32:20.726 "process_max_bandwidth_mb_sec": 0, 00:32:20.726 "process_window_size_kb": 1024 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "bdev_iscsi_set_options", 00:32:20.726 "params": { 00:32:20.726 "timeout_sec": 30 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "bdev_nvme_set_options", 00:32:20.726 "params": { 00:32:20.726 "action_on_timeout": "none", 00:32:20.726 "allow_accel_sequence": false, 00:32:20.726 "arbitration_burst": 0, 00:32:20.726 "bdev_retry_count": 3, 00:32:20.726 "ctrlr_loss_timeout_sec": 0, 00:32:20.726 "delay_cmd_submit": true, 00:32:20.726 "dhchap_dhgroups": [ 00:32:20.726 "null", 00:32:20.726 "ffdhe2048", 00:32:20.726 "ffdhe3072", 00:32:20.726 "ffdhe4096", 00:32:20.726 "ffdhe6144", 00:32:20.726 "ffdhe8192" 00:32:20.726 ], 00:32:20.726 "dhchap_digests": [ 00:32:20.726 "sha256", 00:32:20.726 "sha384", 00:32:20.726 "sha512" 00:32:20.726 ], 00:32:20.726 "disable_auto_failback": false, 00:32:20.726 "fast_io_fail_timeout_sec": 0, 00:32:20.726 "generate_uuids": false, 00:32:20.726 "high_priority_weight": 0, 00:32:20.726 "io_path_stat": false, 00:32:20.726 "io_queue_requests": 512, 00:32:20.726 "keep_alive_timeout_ms": 10000, 00:32:20.726 "low_priority_weight": 0, 00:32:20.726 "medium_priority_weight": 0, 00:32:20.726 "nvme_adminq_poll_period_us": 10000, 00:32:20.726 "nvme_error_stat": false, 00:32:20.726 "nvme_ioq_poll_period_us": 0, 00:32:20.726 "rdma_cm_event_timeout_ms": 0, 00:32:20.726 "rdma_max_cq_size": 0, 00:32:20.726 "rdma_srq_size": 0, 00:32:20.726 "reconnect_delay_sec": 0, 00:32:20.726 "timeout_admin_us": 0, 00:32:20.726 "timeout_us": 0, 00:32:20.726 "transport_ack_timeout": 0, 00:32:20.726 "transport_retry_count": 4, 00:32:20.726 "transport_tos": 0 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "bdev_nvme_attach_controller", 00:32:20.726 "params": { 00:32:20.726 "adrfam": "IPv4", 00:32:20.726 "ctrlr_loss_timeout_sec": 0, 00:32:20.726 "ddgst": false, 00:32:20.726 "fast_io_fail_timeout_sec": 0, 00:32:20.726 "hdgst": false, 00:32:20.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:20.726 "multipath": "multipath", 00:32:20.726 "name": "TLSTEST", 00:32:20.726 "prchk_guard": false, 00:32:20.726 "prchk_reftag": false, 00:32:20.726 "psk": "key0", 00:32:20.726 "reconnect_delay_sec": 0, 00:32:20.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.726 "traddr": "10.0.0.2", 00:32:20.726 "trsvcid": "4420", 00:32:20.726 "trtype": "TCP" 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "bdev_nvme_set_hotplug", 00:32:20.726 "params": { 00:32:20.726 "enable": false, 00:32:20.726 "period_us": 100000 00:32:20.726 } 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "method": "bdev_wait_for_examine" 
00:32:20.726 } 00:32:20.726 ] 00:32:20.726 }, 00:32:20.726 { 00:32:20.726 "subsystem": "nbd", 00:32:20.726 "config": [] 00:32:20.726 } 00:32:20.726 ] 00:32:20.726 }' 00:32:20.726 [2024-12-05 11:15:45.132651] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:20.726 [2024-12-05 11:15:45.132751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84310 ] 00:32:20.726 [2024-12-05 11:15:45.290565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.726 [2024-12-05 11:15:45.353737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.985 [2024-12-05 11:15:45.515460] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:21.551 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.551 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:21.551 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:32:21.809 Running I/O for 10 seconds... 00:32:23.703 5521.00 IOPS, 21.57 MiB/s [2024-12-05T11:15:49.290Z] 5453.50 IOPS, 21.30 MiB/s [2024-12-05T11:15:50.223Z] 5398.33 IOPS, 21.09 MiB/s [2024-12-05T11:15:51.601Z] 5421.00 IOPS, 21.18 MiB/s [2024-12-05T11:15:52.544Z] 5457.80 IOPS, 21.32 MiB/s [2024-12-05T11:15:53.489Z] 5461.50 IOPS, 21.33 MiB/s [2024-12-05T11:15:54.428Z] 5473.00 IOPS, 21.38 MiB/s [2024-12-05T11:15:55.365Z] 5484.88 IOPS, 21.43 MiB/s [2024-12-05T11:15:56.302Z] 5415.22 IOPS, 21.15 MiB/s [2024-12-05T11:15:56.302Z] 5437.40 IOPS, 21.24 MiB/s 00:32:31.650 Latency(us) 00:32:31.650 [2024-12-05T11:15:56.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.650 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:31.650 Verification LBA range: start 0x0 length 0x2000 00:32:31.650 TLSTESTn1 : 10.01 5444.06 21.27 0.00 0.00 23476.53 3822.93 30333.81 00:32:31.650 [2024-12-05T11:15:56.302Z] =================================================================================================================== 00:32:31.650 [2024-12-05T11:15:56.302Z] Total : 5444.06 21.27 0.00 0.00 23476.53 3822.93 30333.81 00:32:31.650 { 00:32:31.650 "results": [ 00:32:31.650 { 00:32:31.650 "job": "TLSTESTn1", 00:32:31.650 "core_mask": "0x4", 00:32:31.650 "workload": "verify", 00:32:31.650 "status": "finished", 00:32:31.650 "verify_range": { 00:32:31.650 "start": 0, 00:32:31.650 "length": 8192 00:32:31.650 }, 00:32:31.650 "queue_depth": 128, 00:32:31.650 "io_size": 4096, 00:32:31.650 "runtime": 10.010917, 00:32:31.650 "iops": 5444.0567232752, 00:32:31.650 "mibps": 21.26584657529375, 00:32:31.650 "io_failed": 0, 00:32:31.650 "io_timeout": 0, 00:32:31.650 "avg_latency_us": 23476.532246815204, 00:32:31.650 "min_latency_us": 3822.9333333333334, 00:32:31.650 "max_latency_us": 30333.805714285714 00:32:31.650 } 00:32:31.650 ], 00:32:31.650 "core_count": 1 00:32:31.650 } 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84310 00:32:31.650 11:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84310 ']' 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84310 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84310 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:32:31.650 killing process with pid 84310 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84310' 00:32:31.650 Received shutdown signal, test time was about 10.000000 seconds 00:32:31.650 00:32:31.650 Latency(us) 00:32:31.650 [2024-12-05T11:15:56.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.650 [2024-12-05T11:15:56.302Z] =================================================================================================================== 00:32:31.650 [2024-12-05T11:15:56.302Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84310 00:32:31.650 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84310 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84266 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84266 ']' 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84266 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84266 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84266' 00:32:32.219 killing process with pid 84266 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84266 00:32:32.219 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84266 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84470 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- 
# ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84470 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84470 ']' 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.479 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:32.479 [2024-12-05 11:15:56.966630] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:32.479 [2024-12-05 11:15:56.966735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.479 [2024-12-05 11:15:57.126338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.738 [2024-12-05 11:15:57.187309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.738 [2024-12-05 11:15:57.187377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.738 [2024-12-05 11:15:57.187392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.738 [2024-12-05 11:15:57.187405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.738 [2024-12-05 11:15:57.187416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
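[editor's note] The nvmfappstart sequence traced above relaunches the target inside the nvmf_ns_spdk network namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that bring-up, condensed from the commands in this log; the polling loop is an approximation of waitforlisten (which is not expanded in the trace), and rpc_get_methods as the liveness probe is an assumption:

    SPDK=/home/vagrant/spdk_repo/spdk
    # start the target in the test namespace with all trace groups enabled (-e 0xFFFF)
    ip netns exec nvmf_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app accepts RPCs (assumed probe)
    until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done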
00:32:32.738 [2024-12-05 11:15:57.187801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.z78ooGyyDB 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.z78ooGyyDB 00:32:33.308 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:32:33.568 [2024-12-05 11:15:58.182701] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.568 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:32:34.136 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:32:34.136 [2024-12-05 11:15:58.754807] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:34.136 [2024-12-05 11:15:58.755026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.137 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:32:34.395 malloc0 00:32:34.395 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:34.964 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:32:34.964 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84580 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84580 /var/tmp/bdevperf.sock 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84580 ']' 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.532 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:35.532 [2024-12-05 11:16:00.010548] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:35.532 [2024-12-05 11:16:00.010643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84580 ] 00:32:35.532 [2024-12-05 11:16:00.153502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.821 [2024-12-05 11:16:00.209775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.439 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.439 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:36.439 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:32:36.697 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:32:36.697 [2024-12-05 11:16:01.313821] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:36.955 nvme0n1 00:32:36.955 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:36.955 Running I/O for 1 seconds... 
00:32:38.334 5132.00 IOPS, 20.05 MiB/s 00:32:38.335 Latency(us) 00:32:38.335 [2024-12-05T11:16:02.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.335 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:38.335 Verification LBA range: start 0x0 length 0x2000 00:32:38.335 nvme0n1 : 1.01 5191.55 20.28 0.00 0.00 24470.49 4805.97 22344.66 00:32:38.335 [2024-12-05T11:16:02.987Z] =================================================================================================================== 00:32:38.335 [2024-12-05T11:16:02.987Z] Total : 5191.55 20.28 0.00 0.00 24470.49 4805.97 22344.66 00:32:38.335 { 00:32:38.335 "results": [ 00:32:38.335 { 00:32:38.335 "job": "nvme0n1", 00:32:38.335 "core_mask": "0x2", 00:32:38.335 "workload": "verify", 00:32:38.335 "status": "finished", 00:32:38.335 "verify_range": { 00:32:38.335 "start": 0, 00:32:38.335 "length": 8192 00:32:38.335 }, 00:32:38.335 "queue_depth": 128, 00:32:38.335 "io_size": 4096, 00:32:38.335 "runtime": 1.013185, 00:32:38.335 "iops": 5191.5494208856235, 00:32:38.335 "mibps": 20.279489925334467, 00:32:38.335 "io_failed": 0, 00:32:38.335 "io_timeout": 0, 00:32:38.335 "avg_latency_us": 24470.487395980446, 00:32:38.335 "min_latency_us": 4805.973333333333, 00:32:38.335 "max_latency_us": 22344.655238095238 00:32:38.335 } 00:32:38.335 ], 00:32:38.335 "core_count": 1 00:32:38.335 } 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84580 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84580 ']' 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84580 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84580 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:38.335 killing process with pid 84580 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84580' 00:32:38.335 Received shutdown signal, test time was about 1.000000 seconds 00:32:38.335 00:32:38.335 Latency(us) 00:32:38.335 [2024-12-05T11:16:02.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.335 [2024-12-05T11:16:02.987Z] =================================================================================================================== 00:32:38.335 [2024-12-05T11:16:02.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84580 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84580 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84470 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84470 ']' 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84470 00:32:38.335 11:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84470 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:38.335 killing process with pid 84470 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84470' 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84470 00:32:38.335 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84470 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84655 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84655 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84655 ']' 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.594 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:38.594 [2024-12-05 11:16:03.225317] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:38.594 [2024-12-05 11:16:03.225401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:38.876 [2024-12-05 11:16:03.366986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.876 [2024-12-05 11:16:03.432572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:38.876 [2024-12-05 11:16:03.432632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:38.876 [2024-12-05 11:16:03.432643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:38.876 [2024-12-05 11:16:03.432651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:38.876 [2024-12-05 11:16:03.432659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:38.876 [2024-12-05 11:16:03.433025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:39.811 [2024-12-05 11:16:04.254572] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.811 malloc0 00:32:39.811 [2024-12-05 11:16:04.289597] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:39.811 [2024-12-05 11:16:04.289843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84705 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84705 /var/tmp/bdevperf.sock 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84705 ']' 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:39.811 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:39.812 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.812 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:39.812 [2024-12-05 11:16:04.373900] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:32:39.812 [2024-12-05 11:16:04.373996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84705 ] 00:32:40.071 [2024-12-05 11:16:04.529811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.071 [2024-12-05 11:16:04.580749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.071 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.071 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:40.071 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.z78ooGyyDB 00:32:40.329 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:32:40.595 [2024-12-05 11:16:05.146847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:40.595 nvme0n1 00:32:40.595 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:40.853 Running I/O for 1 seconds... 00:32:41.787 5285.00 IOPS, 20.64 MiB/s 00:32:41.787 Latency(us) 00:32:41.787 [2024-12-05T11:16:06.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.787 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:41.787 Verification LBA range: start 0x0 length 0x2000 00:32:41.787 nvme0n1 : 1.01 5345.82 20.88 0.00 0.00 23769.51 4805.97 21595.67 00:32:41.787 [2024-12-05T11:16:06.439Z] =================================================================================================================== 00:32:41.787 [2024-12-05T11:16:06.439Z] Total : 5345.82 20.88 0.00 0.00 23769.51 4805.97 21595.67 00:32:41.787 { 00:32:41.787 "results": [ 00:32:41.787 { 00:32:41.787 "job": "nvme0n1", 00:32:41.787 "core_mask": "0x2", 00:32:41.787 "workload": "verify", 00:32:41.787 "status": "finished", 00:32:41.787 "verify_range": { 00:32:41.787 "start": 0, 00:32:41.787 "length": 8192 00:32:41.787 }, 00:32:41.787 "queue_depth": 128, 00:32:41.787 "io_size": 4096, 00:32:41.787 "runtime": 1.012567, 00:32:41.787 "iops": 5345.819091477403, 00:32:41.787 "mibps": 20.882105826083606, 00:32:41.787 "io_failed": 0, 00:32:41.787 "io_timeout": 0, 00:32:41.787 "avg_latency_us": 23769.51394825508, 00:32:41.787 "min_latency_us": 4805.973333333333, 00:32:41.787 "max_latency_us": 21595.67238095238 00:32:41.787 } 00:32:41.787 ], 00:32:41.787 "core_count": 1 00:32:41.787 } 00:32:41.787 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:32:41.787 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.787 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:42.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.046 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:32:42.046 "subsystems": [ 00:32:42.046 { 00:32:42.046 "subsystem": "keyring", 00:32:42.046 "config": [ 00:32:42.046 { 00:32:42.046 "method": "keyring_file_add_key", 00:32:42.046 "params": { 00:32:42.046 "name": "key0", 00:32:42.046 "path": "/tmp/tmp.z78ooGyyDB" 00:32:42.046 } 00:32:42.046 } 00:32:42.046 ] 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "subsystem": "iobuf", 00:32:42.046 "config": [ 00:32:42.046 { 00:32:42.046 "method": "iobuf_set_options", 00:32:42.046 "params": { 00:32:42.046 "enable_numa": false, 00:32:42.046 "large_bufsize": 135168, 00:32:42.046 "large_pool_count": 1024, 00:32:42.046 "small_bufsize": 8192, 00:32:42.046 "small_pool_count": 8192 00:32:42.046 } 00:32:42.046 } 00:32:42.046 ] 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "subsystem": "sock", 00:32:42.046 "config": [ 00:32:42.046 { 00:32:42.046 "method": "sock_set_default_impl", 00:32:42.046 "params": { 00:32:42.046 "impl_name": "posix" 00:32:42.046 } 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "method": "sock_impl_set_options", 00:32:42.046 "params": { 00:32:42.046 "enable_ktls": false, 00:32:42.046 "enable_placement_id": 0, 00:32:42.046 "enable_quickack": false, 00:32:42.046 "enable_recv_pipe": true, 00:32:42.046 "enable_zerocopy_send_client": false, 00:32:42.046 "enable_zerocopy_send_server": true, 00:32:42.046 "impl_name": "ssl", 00:32:42.046 "recv_buf_size": 4096, 00:32:42.046 "send_buf_size": 4096, 00:32:42.046 "tls_version": 0, 00:32:42.046 "zerocopy_threshold": 0 00:32:42.046 } 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "method": "sock_impl_set_options", 00:32:42.046 "params": { 00:32:42.046 "enable_ktls": false, 00:32:42.046 "enable_placement_id": 0, 00:32:42.046 "enable_quickack": false, 00:32:42.046 "enable_recv_pipe": true, 00:32:42.046 "enable_zerocopy_send_client": false, 00:32:42.046 "enable_zerocopy_send_server": true, 00:32:42.046 "impl_name": "posix", 00:32:42.046 "recv_buf_size": 2097152, 00:32:42.046 "send_buf_size": 2097152, 00:32:42.046 "tls_version": 0, 00:32:42.046 "zerocopy_threshold": 0 00:32:42.046 } 00:32:42.046 } 00:32:42.046 ] 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "subsystem": "vmd", 00:32:42.046 "config": [] 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "subsystem": "accel", 00:32:42.046 "config": [ 00:32:42.046 { 00:32:42.046 "method": "accel_set_options", 00:32:42.046 "params": { 00:32:42.046 "buf_count": 2048, 00:32:42.046 "large_cache_size": 16, 00:32:42.046 "sequence_count": 2048, 00:32:42.046 "small_cache_size": 128, 00:32:42.046 "task_count": 2048 00:32:42.046 } 00:32:42.046 } 00:32:42.046 ] 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "subsystem": "bdev", 00:32:42.046 "config": [ 00:32:42.046 { 00:32:42.046 "method": "bdev_set_options", 00:32:42.046 "params": { 00:32:42.046 "bdev_auto_examine": true, 00:32:42.046 "bdev_io_cache_size": 256, 00:32:42.046 "bdev_io_pool_size": 65535, 00:32:42.046 "iobuf_large_cache_size": 16, 00:32:42.046 "iobuf_small_cache_size": 128 00:32:42.046 } 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "method": "bdev_raid_set_options", 00:32:42.046 "params": { 00:32:42.046 "process_max_bandwidth_mb_sec": 0, 00:32:42.046 "process_window_size_kb": 1024 00:32:42.046 } 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "method": "bdev_iscsi_set_options", 00:32:42.046 "params": { 00:32:42.046 "timeout_sec": 30 00:32:42.046 } 00:32:42.046 }, 00:32:42.046 { 00:32:42.046 "method": "bdev_nvme_set_options", 00:32:42.046 "params": { 00:32:42.046 "action_on_timeout": "none", 00:32:42.046 "allow_accel_sequence": false, 00:32:42.046 "arbitration_burst": 0, 00:32:42.046 
"bdev_retry_count": 3, 00:32:42.046 "ctrlr_loss_timeout_sec": 0, 00:32:42.046 "delay_cmd_submit": true, 00:32:42.046 "dhchap_dhgroups": [ 00:32:42.046 "null", 00:32:42.046 "ffdhe2048", 00:32:42.046 "ffdhe3072", 00:32:42.046 "ffdhe4096", 00:32:42.046 "ffdhe6144", 00:32:42.046 "ffdhe8192" 00:32:42.046 ], 00:32:42.046 "dhchap_digests": [ 00:32:42.046 "sha256", 00:32:42.046 "sha384", 00:32:42.046 "sha512" 00:32:42.046 ], 00:32:42.046 "disable_auto_failback": false, 00:32:42.046 "fast_io_fail_timeout_sec": 0, 00:32:42.046 "generate_uuids": false, 00:32:42.046 "high_priority_weight": 0, 00:32:42.046 "io_path_stat": false, 00:32:42.046 "io_queue_requests": 0, 00:32:42.046 "keep_alive_timeout_ms": 10000, 00:32:42.046 "low_priority_weight": 0, 00:32:42.046 "medium_priority_weight": 0, 00:32:42.046 "nvme_adminq_poll_period_us": 10000, 00:32:42.046 "nvme_error_stat": false, 00:32:42.046 "nvme_ioq_poll_period_us": 0, 00:32:42.046 "rdma_cm_event_timeout_ms": 0, 00:32:42.046 "rdma_max_cq_size": 0, 00:32:42.046 "rdma_srq_size": 0, 00:32:42.046 "reconnect_delay_sec": 0, 00:32:42.046 "timeout_admin_us": 0, 00:32:42.046 "timeout_us": 0, 00:32:42.046 "transport_ack_timeout": 0, 00:32:42.046 "transport_retry_count": 4, 00:32:42.046 "transport_tos": 0 00:32:42.046 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "bdev_nvme_set_hotplug", 00:32:42.047 "params": { 00:32:42.047 "enable": false, 00:32:42.047 "period_us": 100000 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "bdev_malloc_create", 00:32:42.047 "params": { 00:32:42.047 "block_size": 4096, 00:32:42.047 "dif_is_head_of_md": false, 00:32:42.047 "dif_pi_format": 0, 00:32:42.047 "dif_type": 0, 00:32:42.047 "md_size": 0, 00:32:42.047 "name": "malloc0", 00:32:42.047 "num_blocks": 8192, 00:32:42.047 "optimal_io_boundary": 0, 00:32:42.047 "physical_block_size": 4096, 00:32:42.047 "uuid": "af4e44e8-c054-4ed5-a7c1-6675c45ef1b0" 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "bdev_wait_for_examine" 00:32:42.047 } 00:32:42.047 ] 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "subsystem": "nbd", 00:32:42.047 "config": [] 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "subsystem": "scheduler", 00:32:42.047 "config": [ 00:32:42.047 { 00:32:42.047 "method": "framework_set_scheduler", 00:32:42.047 "params": { 00:32:42.047 "name": "static" 00:32:42.047 } 00:32:42.047 } 00:32:42.047 ] 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "subsystem": "nvmf", 00:32:42.047 "config": [ 00:32:42.047 { 00:32:42.047 "method": "nvmf_set_config", 00:32:42.047 "params": { 00:32:42.047 "admin_cmd_passthru": { 00:32:42.047 "identify_ctrlr": false 00:32:42.047 }, 00:32:42.047 "dhchap_dhgroups": [ 00:32:42.047 "null", 00:32:42.047 "ffdhe2048", 00:32:42.047 "ffdhe3072", 00:32:42.047 "ffdhe4096", 00:32:42.047 "ffdhe6144", 00:32:42.047 "ffdhe8192" 00:32:42.047 ], 00:32:42.047 "dhchap_digests": [ 00:32:42.047 "sha256", 00:32:42.047 "sha384", 00:32:42.047 "sha512" 00:32:42.047 ], 00:32:42.047 "discovery_filter": "match_any" 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "nvmf_set_max_subsystems", 00:32:42.047 "params": { 00:32:42.047 "max_subsystems": 1024 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "nvmf_set_crdt", 00:32:42.047 "params": { 00:32:42.047 "crdt1": 0, 00:32:42.047 "crdt2": 0, 00:32:42.047 "crdt3": 0 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "nvmf_create_transport", 00:32:42.047 "params": { 00:32:42.047 "abort_timeout_sec": 1, 00:32:42.047 "ack_timeout": 0, 
00:32:42.047 "buf_cache_size": 4294967295, 00:32:42.047 "c2h_success": false, 00:32:42.047 "data_wr_pool_size": 0, 00:32:42.047 "dif_insert_or_strip": false, 00:32:42.047 "in_capsule_data_size": 4096, 00:32:42.047 "io_unit_size": 131072, 00:32:42.047 "max_aq_depth": 128, 00:32:42.047 "max_io_qpairs_per_ctrlr": 127, 00:32:42.047 "max_io_size": 131072, 00:32:42.047 "max_queue_depth": 128, 00:32:42.047 "num_shared_buffers": 511, 00:32:42.047 "sock_priority": 0, 00:32:42.047 "trtype": "TCP", 00:32:42.047 "zcopy": false 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "nvmf_create_subsystem", 00:32:42.047 "params": { 00:32:42.047 "allow_any_host": false, 00:32:42.047 "ana_reporting": false, 00:32:42.047 "max_cntlid": 65519, 00:32:42.047 "max_namespaces": 32, 00:32:42.047 "min_cntlid": 1, 00:32:42.047 "model_number": "SPDK bdev Controller", 00:32:42.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.047 "serial_number": "00000000000000000000" 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "nvmf_subsystem_add_host", 00:32:42.047 "params": { 00:32:42.047 "host": "nqn.2016-06.io.spdk:host1", 00:32:42.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.047 "psk": "key0" 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "nvmf_subsystem_add_ns", 00:32:42.047 "params": { 00:32:42.047 "namespace": { 00:32:42.047 "bdev_name": "malloc0", 00:32:42.047 "nguid": "AF4E44E8C0544ED5A7C16675C45EF1B0", 00:32:42.047 "no_auto_visible": false, 00:32:42.047 "nsid": 1, 00:32:42.047 "uuid": "af4e44e8-c054-4ed5-a7c1-6675c45ef1b0" 00:32:42.047 }, 00:32:42.047 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:42.047 } 00:32:42.047 }, 00:32:42.047 { 00:32:42.047 "method": "nvmf_subsystem_add_listener", 00:32:42.047 "params": { 00:32:42.047 "listen_address": { 00:32:42.047 "adrfam": "IPv4", 00:32:42.047 "traddr": "10.0.0.2", 00:32:42.047 "trsvcid": "4420", 00:32:42.047 "trtype": "TCP" 00:32:42.047 }, 00:32:42.047 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.047 "secure_channel": false, 00:32:42.047 "sock_impl": "ssl" 00:32:42.047 } 00:32:42.047 } 00:32:42.047 ] 00:32:42.047 } 00:32:42.047 ] 00:32:42.047 }' 00:32:42.047 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:32:42.305 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:32:42.305 "subsystems": [ 00:32:42.305 { 00:32:42.305 "subsystem": "keyring", 00:32:42.305 "config": [ 00:32:42.305 { 00:32:42.305 "method": "keyring_file_add_key", 00:32:42.305 "params": { 00:32:42.305 "name": "key0", 00:32:42.305 "path": "/tmp/tmp.z78ooGyyDB" 00:32:42.305 } 00:32:42.305 } 00:32:42.305 ] 00:32:42.305 }, 00:32:42.305 { 00:32:42.305 "subsystem": "iobuf", 00:32:42.305 "config": [ 00:32:42.305 { 00:32:42.305 "method": "iobuf_set_options", 00:32:42.305 "params": { 00:32:42.305 "enable_numa": false, 00:32:42.305 "large_bufsize": 135168, 00:32:42.305 "large_pool_count": 1024, 00:32:42.305 "small_bufsize": 8192, 00:32:42.305 "small_pool_count": 8192 00:32:42.305 } 00:32:42.305 } 00:32:42.305 ] 00:32:42.305 }, 00:32:42.305 { 00:32:42.305 "subsystem": "sock", 00:32:42.305 "config": [ 00:32:42.305 { 00:32:42.305 "method": "sock_set_default_impl", 00:32:42.305 "params": { 00:32:42.305 "impl_name": "posix" 00:32:42.305 } 00:32:42.305 }, 00:32:42.305 { 00:32:42.305 "method": "sock_impl_set_options", 00:32:42.305 "params": { 00:32:42.305 "enable_ktls": false, 00:32:42.305 "enable_placement_id": 0, 
00:32:42.305 "enable_quickack": false, 00:32:42.305 "enable_recv_pipe": true, 00:32:42.305 "enable_zerocopy_send_client": false, 00:32:42.305 "enable_zerocopy_send_server": true, 00:32:42.305 "impl_name": "ssl", 00:32:42.305 "recv_buf_size": 4096, 00:32:42.305 "send_buf_size": 4096, 00:32:42.305 "tls_version": 0, 00:32:42.305 "zerocopy_threshold": 0 00:32:42.305 } 00:32:42.305 }, 00:32:42.305 { 00:32:42.305 "method": "sock_impl_set_options", 00:32:42.305 "params": { 00:32:42.305 "enable_ktls": false, 00:32:42.305 "enable_placement_id": 0, 00:32:42.305 "enable_quickack": false, 00:32:42.305 "enable_recv_pipe": true, 00:32:42.305 "enable_zerocopy_send_client": false, 00:32:42.305 "enable_zerocopy_send_server": true, 00:32:42.305 "impl_name": "posix", 00:32:42.305 "recv_buf_size": 2097152, 00:32:42.305 "send_buf_size": 2097152, 00:32:42.305 "tls_version": 0, 00:32:42.305 "zerocopy_threshold": 0 00:32:42.305 } 00:32:42.305 } 00:32:42.305 ] 00:32:42.305 }, 00:32:42.305 { 00:32:42.305 "subsystem": "vmd", 00:32:42.305 "config": [] 00:32:42.305 }, 00:32:42.305 { 00:32:42.305 "subsystem": "accel", 00:32:42.305 "config": [ 00:32:42.305 { 00:32:42.305 "method": "accel_set_options", 00:32:42.305 "params": { 00:32:42.305 "buf_count": 2048, 00:32:42.305 "large_cache_size": 16, 00:32:42.305 "sequence_count": 2048, 00:32:42.305 "small_cache_size": 128, 00:32:42.305 "task_count": 2048 00:32:42.305 } 00:32:42.305 } 00:32:42.305 ] 00:32:42.305 }, 00:32:42.305 { 00:32:42.305 "subsystem": "bdev", 00:32:42.305 "config": [ 00:32:42.306 { 00:32:42.306 "method": "bdev_set_options", 00:32:42.306 "params": { 00:32:42.306 "bdev_auto_examine": true, 00:32:42.306 "bdev_io_cache_size": 256, 00:32:42.306 "bdev_io_pool_size": 65535, 00:32:42.306 "iobuf_large_cache_size": 16, 00:32:42.306 "iobuf_small_cache_size": 128 00:32:42.306 } 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "method": "bdev_raid_set_options", 00:32:42.306 "params": { 00:32:42.306 "process_max_bandwidth_mb_sec": 0, 00:32:42.306 "process_window_size_kb": 1024 00:32:42.306 } 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "method": "bdev_iscsi_set_options", 00:32:42.306 "params": { 00:32:42.306 "timeout_sec": 30 00:32:42.306 } 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "method": "bdev_nvme_set_options", 00:32:42.306 "params": { 00:32:42.306 "action_on_timeout": "none", 00:32:42.306 "allow_accel_sequence": false, 00:32:42.306 "arbitration_burst": 0, 00:32:42.306 "bdev_retry_count": 3, 00:32:42.306 "ctrlr_loss_timeout_sec": 0, 00:32:42.306 "delay_cmd_submit": true, 00:32:42.306 "dhchap_dhgroups": [ 00:32:42.306 "null", 00:32:42.306 "ffdhe2048", 00:32:42.306 "ffdhe3072", 00:32:42.306 "ffdhe4096", 00:32:42.306 "ffdhe6144", 00:32:42.306 "ffdhe8192" 00:32:42.306 ], 00:32:42.306 "dhchap_digests": [ 00:32:42.306 "sha256", 00:32:42.306 "sha384", 00:32:42.306 "sha512" 00:32:42.306 ], 00:32:42.306 "disable_auto_failback": false, 00:32:42.306 "fast_io_fail_timeout_sec": 0, 00:32:42.306 "generate_uuids": false, 00:32:42.306 "high_priority_weight": 0, 00:32:42.306 "io_path_stat": false, 00:32:42.306 "io_queue_requests": 512, 00:32:42.306 "keep_alive_timeout_ms": 10000, 00:32:42.306 "low_priority_weight": 0, 00:32:42.306 "medium_priority_weight": 0, 00:32:42.306 "nvme_adminq_poll_period_us": 10000, 00:32:42.306 "nvme_error_stat": false, 00:32:42.306 "nvme_ioq_poll_period_us": 0, 00:32:42.306 "rdma_cm_event_timeout_ms": 0, 00:32:42.306 "rdma_max_cq_size": 0, 00:32:42.306 "rdma_srq_size": 0, 00:32:42.306 "reconnect_delay_sec": 0, 00:32:42.306 "timeout_admin_us": 0, 00:32:42.306 
"timeout_us": 0, 00:32:42.306 "transport_ack_timeout": 0, 00:32:42.306 "transport_retry_count": 4, 00:32:42.306 "transport_tos": 0 00:32:42.306 } 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "method": "bdev_nvme_attach_controller", 00:32:42.306 "params": { 00:32:42.306 "adrfam": "IPv4", 00:32:42.306 "ctrlr_loss_timeout_sec": 0, 00:32:42.306 "ddgst": false, 00:32:42.306 "fast_io_fail_timeout_sec": 0, 00:32:42.306 "hdgst": false, 00:32:42.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:42.306 "multipath": "multipath", 00:32:42.306 "name": "nvme0", 00:32:42.306 "prchk_guard": false, 00:32:42.306 "prchk_reftag": false, 00:32:42.306 "psk": "key0", 00:32:42.306 "reconnect_delay_sec": 0, 00:32:42.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.306 "traddr": "10.0.0.2", 00:32:42.306 "trsvcid": "4420", 00:32:42.306 "trtype": "TCP" 00:32:42.306 } 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "method": "bdev_nvme_set_hotplug", 00:32:42.306 "params": { 00:32:42.306 "enable": false, 00:32:42.306 "period_us": 100000 00:32:42.306 } 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "method": "bdev_enable_histogram", 00:32:42.306 "params": { 00:32:42.306 "enable": true, 00:32:42.306 "name": "nvme0n1" 00:32:42.306 } 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "method": "bdev_wait_for_examine" 00:32:42.306 } 00:32:42.306 ] 00:32:42.306 }, 00:32:42.306 { 00:32:42.306 "subsystem": "nbd", 00:32:42.306 "config": [] 00:32:42.306 } 00:32:42.306 ] 00:32:42.306 }' 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84705 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84705 ']' 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84705 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84705 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:42.306 killing process with pid 84705 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84705' 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84705 00:32:42.306 Received shutdown signal, test time was about 1.000000 seconds 00:32:42.306 00:32:42.306 Latency(us) 00:32:42.306 [2024-12-05T11:16:06.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.306 [2024-12-05T11:16:06.958Z] =================================================================================================================== 00:32:42.306 [2024-12-05T11:16:06.958Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:42.306 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84705 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84655 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84655 ']' 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84655 
00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84655 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.564 killing process with pid 84655 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84655' 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84655 00:32:42.564 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84655 00:32:42.822 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:32:42.822 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:42.822 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.822 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:32:42.822 "subsystems": [ 00:32:42.822 { 00:32:42.822 "subsystem": "keyring", 00:32:42.822 "config": [ 00:32:42.822 { 00:32:42.822 "method": "keyring_file_add_key", 00:32:42.822 "params": { 00:32:42.822 "name": "key0", 00:32:42.822 "path": "/tmp/tmp.z78ooGyyDB" 00:32:42.822 } 00:32:42.822 } 00:32:42.822 ] 00:32:42.822 }, 00:32:42.822 { 00:32:42.822 "subsystem": "iobuf", 00:32:42.822 "config": [ 00:32:42.822 { 00:32:42.822 "method": "iobuf_set_options", 00:32:42.822 "params": { 00:32:42.822 "enable_numa": false, 00:32:42.822 "large_bufsize": 135168, 00:32:42.822 "large_pool_count": 1024, 00:32:42.822 "small_bufsize": 8192, 00:32:42.822 "small_pool_count": 8192 00:32:42.822 } 00:32:42.822 } 00:32:42.822 ] 00:32:42.822 }, 00:32:42.822 { 00:32:42.822 "subsystem": "sock", 00:32:42.822 "config": [ 00:32:42.822 { 00:32:42.822 "method": "sock_set_default_impl", 00:32:42.822 "params": { 00:32:42.822 "impl_name": "posix" 00:32:42.822 } 00:32:42.822 }, 00:32:42.822 { 00:32:42.822 "method": "sock_impl_set_options", 00:32:42.822 "params": { 00:32:42.822 "enable_ktls": false, 00:32:42.822 "enable_placement_id": 0, 00:32:42.822 "enable_quickack": false, 00:32:42.822 "enable_recv_pipe": true, 00:32:42.822 "enable_zerocopy_send_client": false, 00:32:42.822 "enable_zerocopy_send_server": true, 00:32:42.822 "impl_name": "ssl", 00:32:42.822 "recv_buf_size": 4096, 00:32:42.822 "send_buf_size": 4096, 00:32:42.823 "tls_version": 0, 00:32:42.823 "zerocopy_threshold": 0 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "sock_impl_set_options", 00:32:42.823 "params": { 00:32:42.823 "enable_ktls": false, 00:32:42.823 "enable_placement_id": 0, 00:32:42.823 "enable_quickack": false, 00:32:42.823 "enable_recv_pipe": true, 00:32:42.823 "enable_zerocopy_send_client": false, 00:32:42.823 "enable_zerocopy_send_server": true, 00:32:42.823 "impl_name": "posix", 00:32:42.823 "recv_buf_size": 2097152, 00:32:42.823 "send_buf_size": 2097152, 00:32:42.823 "tls_version": 0, 00:32:42.823 "zerocopy_threshold": 0 00:32:42.823 } 00:32:42.823 } 00:32:42.823 ] 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "subsystem": "vmd", 00:32:42.823 
"config": [] 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "subsystem": "accel", 00:32:42.823 "config": [ 00:32:42.823 { 00:32:42.823 "method": "accel_set_options", 00:32:42.823 "params": { 00:32:42.823 "buf_count": 2048, 00:32:42.823 "large_cache_size": 16, 00:32:42.823 "sequence_count": 2048, 00:32:42.823 "small_cache_size": 128, 00:32:42.823 "task_count": 2048 00:32:42.823 } 00:32:42.823 } 00:32:42.823 ] 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "subsystem": "bdev", 00:32:42.823 "config": [ 00:32:42.823 { 00:32:42.823 "method": "bdev_set_options", 00:32:42.823 "params": { 00:32:42.823 "bdev_auto_examine": true, 00:32:42.823 "bdev_io_cache_size": 256, 00:32:42.823 "bdev_io_pool_size": 65535, 00:32:42.823 "iobuf_large_cache_size": 16, 00:32:42.823 "iobuf_small_cache_size": 128 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "bdev_raid_set_options", 00:32:42.823 "params": { 00:32:42.823 "process_max_bandwidth_mb_sec": 0, 00:32:42.823 "process_window_size_kb": 1024 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "bdev_iscsi_set_options", 00:32:42.823 "params": { 00:32:42.823 "timeout_sec": 30 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "bdev_nvme_set_options", 00:32:42.823 "params": { 00:32:42.823 "action_on_timeout": "none", 00:32:42.823 "allow_accel_sequence": false, 00:32:42.823 "arbitration_burst": 0, 00:32:42.823 "bdev_retry_count": 3, 00:32:42.823 "ctrlr_loss_timeout_sec": 0, 00:32:42.823 "delay_cmd_submit": true, 00:32:42.823 "dhchap_dhgroups": [ 00:32:42.823 "null", 00:32:42.823 "ffdhe2048", 00:32:42.823 "ffdhe3072", 00:32:42.823 "ffdhe4096", 00:32:42.823 "ffdhe6144", 00:32:42.823 "ffdhe8192" 00:32:42.823 ], 00:32:42.823 "dhchap_digests": [ 00:32:42.823 "sha256", 00:32:42.823 "sha384", 00:32:42.823 "sha512" 00:32:42.823 ], 00:32:42.823 "disable_auto_failback": false, 00:32:42.823 "fast_io_fail_timeout_sec": 0, 00:32:42.823 "generate_uuids": false, 00:32:42.823 "high_priority_weight": 0, 00:32:42.823 "io_path_stat": false, 00:32:42.823 "io_queue_requests": 0, 00:32:42.823 "keep_alive_timeout_ms": 10000, 00:32:42.823 "low_priority_weight": 0, 00:32:42.823 "medium_priority_weight": 0, 00:32:42.823 "nvme_adminq_poll_period_us": 10000, 00:32:42.823 "nvme_error_stat": false, 00:32:42.823 "nvme_ioq_poll_period_us": 0, 00:32:42.823 "rdma_cm_event_timeout_ms": 0, 00:32:42.823 "rdma_max_cq_size": 0, 00:32:42.823 "rdma_srq_size": 0, 00:32:42.823 "reconnect_delay_sec": 0, 00:32:42.823 "timeout_admin_us": 0, 00:32:42.823 "timeout_us": 0, 00:32:42.823 "transport_ack_timeout": 0, 00:32:42.823 "transport_retry_count": 4, 00:32:42.823 "transport_tos": 0 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "bdev_nvme_set_hotplug", 00:32:42.823 "params": { 00:32:42.823 "enable": false, 00:32:42.823 "period_us": 100000 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "bdev_malloc_create", 00:32:42.823 "params": { 00:32:42.823 "block_size": 4096, 00:32:42.823 "dif_is_head_of_md": false, 00:32:42.823 "dif_pi_format": 0, 00:32:42.823 "dif_type": 0, 00:32:42.823 "md_size": 0, 00:32:42.823 "name": "malloc0", 00:32:42.823 "num_blocks": 8192, 00:32:42.823 "optimal_io_boundary": 0, 00:32:42.823 "physical_block_size": 4096, 00:32:42.823 "uuid": "af4e44e8-c054-4ed5-a7c1-6675c45ef1b0" 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "bdev_wait_for_examine" 00:32:42.823 } 00:32:42.823 ] 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "subsystem": "nbd", 00:32:42.823 "config": [] 00:32:42.823 }, 
00:32:42.823 { 00:32:42.823 "subsystem": "scheduler", 00:32:42.823 "config": [ 00:32:42.823 { 00:32:42.823 "method": "framework_set_scheduler", 00:32:42.823 "params": { 00:32:42.823 "name": "static" 00:32:42.823 } 00:32:42.823 } 00:32:42.823 ] 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "subsystem": "nvmf", 00:32:42.823 "config": [ 00:32:42.823 { 00:32:42.823 "method": "nvmf_set_config", 00:32:42.823 "params": { 00:32:42.823 "admin_cmd_passthru": { 00:32:42.823 "identify_ctrlr": false 00:32:42.823 }, 00:32:42.823 "dhchap_dhgroups": [ 00:32:42.823 "null", 00:32:42.823 "ffdhe2048", 00:32:42.823 "ffdhe3072", 00:32:42.823 "ffdhe4096", 00:32:42.823 "ffdhe6144", 00:32:42.823 "ffdhe8192" 00:32:42.823 ], 00:32:42.823 "dhchap_digests": [ 00:32:42.823 "sha256", 00:32:42.823 "sha384", 00:32:42.823 "sha512" 00:32:42.823 ], 00:32:42.823 "discovery_filter": "match_any" 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "nvmf_set_max_subsystems", 00:32:42.823 "params": { 00:32:42.823 "max_subsystems": 1024 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "nvmf_set_crdt", 00:32:42.823 "params": { 00:32:42.823 "crdt1": 0, 00:32:42.823 "crdt2": 0, 00:32:42.823 "crdt3": 0 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "nvmf_create_transport", 00:32:42.823 "params": { 00:32:42.823 "abort_timeout_sec": 1, 00:32:42.823 "ack_timeout": 0, 00:32:42.823 "buf_cache_size": 4294967295, 00:32:42.823 "c2h_success": false, 00:32:42.823 "data_wr_pool_size": 0, 00:32:42.823 "dif_insert_or_strip": false, 00:32:42.823 "in_capsule_data_size": 4096, 00:32:42.823 "io_unit_size": 131072, 00:32:42.823 "max_aq_depth": 128, 00:32:42.823 "max_io_qpairs_per_ctrlr": 127, 00:32:42.823 "max_io_size": 131072, 00:32:42.823 "max_queue_depth": 128, 00:32:42.823 "num_shared_buffers": 511, 00:32:42.823 "sock_priority": 0, 00:32:42.823 "trtype": "TCP", 00:32:42.823 "zcopy": false 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "nvmf_create_subsystem", 00:32:42.823 "params": { 00:32:42.823 "allow_any_host": false, 00:32:42.823 "ana_reporting": false, 00:32:42.823 "max_cntlid": 65519, 00:32:42.823 "max_namespaces": 32, 00:32:42.823 "min_cntlid": 1, 00:32:42.823 "model_number": "SPDK bdev Controller", 00:32:42.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.823 "serial_number": "00000000000000000000" 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "nvmf_subsystem_add_host", 00:32:42.823 "params": { 00:32:42.823 "host": "nqn.2016-06.io.spdk:host1", 00:32:42.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.823 "psk": "key0" 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "nvmf_subsystem_add_ns", 00:32:42.823 "params": { 00:32:42.823 "namespace": { 00:32:42.823 "bdev_name": "malloc0", 00:32:42.823 "nguid": "AF4E44E8C0544ED5A7C16675C45EF1B0", 00:32:42.823 "no_auto_visible": false, 00:32:42.823 "nsid": 1, 00:32:42.823 "uuid": "af4e44e8-c054-4ed5-a7c1-6675c45ef1b0" 00:32:42.823 }, 00:32:42.823 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:32:42.823 } 00:32:42.823 }, 00:32:42.823 { 00:32:42.823 "method": "nvmf_subsystem_add_listener", 00:32:42.823 "params": { 00:32:42.823 "listen_address": { 00:32:42.823 "adrfam": "IPv4", 00:32:42.823 "traddr": "10.0.0.2", 00:32:42.823 "trsvcid": "4420", 00:32:42.823 "trtype": "TCP" 00:32:42.823 }, 00:32:42.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:42.823 "secure_channel": false, 00:32:42.823 "sock_impl": "ssl" 00:32:42.823 } 00:32:42.823 } 00:32:42.823 ] 00:32:42.823 } 00:32:42.823 ] 00:32:42.823 
}' 00:32:42.823 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:42.823 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84786 00:32:42.823 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84786 00:32:42.823 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84786 ']' 00:32:42.823 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.823 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:32:42.823 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.824 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.824 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.824 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:42.824 [2024-12-05 11:16:07.463491] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:42.824 [2024-12-05 11:16:07.463566] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.082 [2024-12-05 11:16:07.598754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.082 [2024-12-05 11:16:07.660490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.082 [2024-12-05 11:16:07.660537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.082 [2024-12-05 11:16:07.660547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.082 [2024-12-05 11:16:07.660555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.082 [2024-12-05 11:16:07.660562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
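[editor's note] The configuration echoed above is the target-side state that setup_nvmf_tgt built earlier in this run. Condensed from the rpc.py invocations traced in this log, the TLS-enabled target amounts to the following sequence; paths, addresses, and NQNs are the ones this run used:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py"
    KEY=/tmp/tmp.z78ooGyyDB   # PSK interchange file created earlier in the run
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable (logged above as experimental)
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 "$KEY"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

This is consistent with the save_config JSON dumped above, where the listener carries "secure_channel": false with "sock_impl": "ssl" and the host entry references "psk": "key0".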
00:32:43.082 [2024-12-05 11:16:07.660978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.339 [2024-12-05 11:16:07.942083] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.339 [2024-12-05 11:16:07.974035] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:43.339 [2024-12-05 11:16:07.974274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84830 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84830 /var/tmp/bdevperf.sock 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84830 ']' 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:32:43.920 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:32:43.920 "subsystems": [ 00:32:43.920 { 00:32:43.920 "subsystem": "keyring", 00:32:43.920 "config": [ 00:32:43.921 { 00:32:43.921 "method": "keyring_file_add_key", 00:32:43.921 "params": { 00:32:43.921 "name": "key0", 00:32:43.921 "path": "/tmp/tmp.z78ooGyyDB" 00:32:43.921 } 00:32:43.921 } 00:32:43.921 ] 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "subsystem": "iobuf", 00:32:43.921 "config": [ 00:32:43.921 { 00:32:43.921 "method": "iobuf_set_options", 00:32:43.921 "params": { 00:32:43.921 "enable_numa": false, 00:32:43.921 "large_bufsize": 135168, 00:32:43.921 "large_pool_count": 1024, 00:32:43.921 "small_bufsize": 8192, 00:32:43.921 "small_pool_count": 8192 00:32:43.921 } 00:32:43.921 } 00:32:43.921 ] 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "subsystem": "sock", 00:32:43.921 "config": [ 00:32:43.921 { 00:32:43.921 "method": "sock_set_default_impl", 00:32:43.921 "params": { 00:32:43.921 "impl_name": "posix" 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "sock_impl_set_options", 00:32:43.921 "params": { 00:32:43.921 "enable_ktls": false, 00:32:43.921 "enable_placement_id": 0, 00:32:43.921 "enable_quickack": false, 00:32:43.921 "enable_recv_pipe": true, 00:32:43.921 "enable_zerocopy_send_client": false, 00:32:43.921 "enable_zerocopy_send_server": true, 00:32:43.921 "impl_name": "ssl", 00:32:43.921 "recv_buf_size": 4096, 00:32:43.921 "send_buf_size": 4096, 00:32:43.921 "tls_version": 0, 00:32:43.921 "zerocopy_threshold": 0 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "sock_impl_set_options", 00:32:43.921 "params": { 00:32:43.921 "enable_ktls": false, 00:32:43.921 "enable_placement_id": 0, 00:32:43.921 "enable_quickack": false, 00:32:43.921 "enable_recv_pipe": true, 00:32:43.921 "enable_zerocopy_send_client": false, 00:32:43.921 "enable_zerocopy_send_server": true, 00:32:43.921 "impl_name": "posix", 00:32:43.921 "recv_buf_size": 2097152, 00:32:43.921 "send_buf_size": 2097152, 00:32:43.921 "tls_version": 0, 00:32:43.921 "zerocopy_threshold": 0 00:32:43.921 } 00:32:43.921 } 00:32:43.921 ] 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "subsystem": "vmd", 00:32:43.921 "config": [] 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "subsystem": "accel", 00:32:43.921 "config": [ 00:32:43.921 { 00:32:43.921 "method": "accel_set_options", 00:32:43.921 "params": { 00:32:43.921 "buf_count": 2048, 00:32:43.921 "large_cache_size": 16, 00:32:43.921 "sequence_count": 2048, 00:32:43.921 "small_cache_size": 128, 00:32:43.921 "task_count": 2048 00:32:43.921 } 00:32:43.921 } 00:32:43.921 ] 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "subsystem": "bdev", 00:32:43.921 "config": [ 00:32:43.921 { 00:32:43.921 "method": "bdev_set_options", 00:32:43.921 "params": { 00:32:43.921 "bdev_auto_examine": true, 00:32:43.921 "bdev_io_cache_size": 256, 00:32:43.921 "bdev_io_pool_size": 65535, 00:32:43.921 "iobuf_large_cache_size": 16, 00:32:43.921 "iobuf_small_cache_size": 128 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "bdev_raid_set_options", 
00:32:43.921 "params": { 00:32:43.921 "process_max_bandwidth_mb_sec": 0, 00:32:43.921 "process_window_size_kb": 1024 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "bdev_iscsi_set_options", 00:32:43.921 "params": { 00:32:43.921 "timeout_sec": 30 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "bdev_nvme_set_options", 00:32:43.921 "params": { 00:32:43.921 "action_on_timeout": "none", 00:32:43.921 "allow_accel_sequence": false, 00:32:43.921 "arbitration_burst": 0, 00:32:43.921 "bdev_retry_count": 3, 00:32:43.921 "ctrlr_loss_timeout_sec": 0, 00:32:43.921 "delay_cmd_submit": true, 00:32:43.921 "dhchap_dhgroups": [ 00:32:43.921 "null", 00:32:43.921 "ffdhe2048", 00:32:43.921 "ffdhe3072", 00:32:43.921 "ffdhe4096", 00:32:43.921 "ffdhe6144", 00:32:43.921 "ffdhe8192" 00:32:43.921 ], 00:32:43.921 "dhchap_digests": [ 00:32:43.921 "sha256", 00:32:43.921 "sha384", 00:32:43.921 "sha512" 00:32:43.921 ], 00:32:43.921 "disable_auto_failback": false, 00:32:43.921 "fast_io_fail_timeout_sec": 0, 00:32:43.921 "generate_uuids": false, 00:32:43.921 "high_priority_weight": 0, 00:32:43.921 "io_path_stat": false, 00:32:43.921 "io_queue_requests": 512, 00:32:43.921 "keep_alive_timeout_ms": 10000, 00:32:43.921 "low_priority_weight": 0, 00:32:43.921 "medium_priority_weight": 0, 00:32:43.921 "nvme_adminq_poll_period_us": 10000, 00:32:43.921 "nvme_error_stat": false, 00:32:43.921 "nvme_ioq_poll_period_us": 0, 00:32:43.921 "rdma_cm_event_timeout_ms": 0, 00:32:43.921 "rdma_max_cq_size": 0, 00:32:43.921 "rdma_srq_size": 0, 00:32:43.921 "reconnect_delay_sec": 0, 00:32:43.921 "timeout_admin_us": 0, 00:32:43.921 "timeout_us": 0, 00:32:43.921 "transport_ack_timeout": 0, 00:32:43.921 "transport_retry_count": 4, 00:32:43.921 "transport_tos": 0 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "bdev_nvme_attach_controller", 00:32:43.921 "params": { 00:32:43.921 "adrfam": "IPv4", 00:32:43.921 "ctrlr_loss_timeout_sec": 0, 00:32:43.921 "ddgst": false, 00:32:43.921 "fast_io_fail_timeout_sec": 0, 00:32:43.921 "hdgst": false, 00:32:43.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.921 "multipath": "multipath", 00:32:43.921 "name": "nvme0", 00:32:43.921 "prchk_guard": false, 00:32:43.921 "prchk_reftag": false, 00:32:43.921 "psk": "key0", 00:32:43.921 "reconnect_delay_sec": 0, 00:32:43.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.921 "traddr": "10.0.0.2", 00:32:43.921 "trsvcid": "4420", 00:32:43.921 "trtype": "TCP" 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "bdev_nvme_set_hotplug", 00:32:43.921 "params": { 00:32:43.921 "enable": false, 00:32:43.921 "period_us": 100000 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "bdev_enable_histogram", 00:32:43.921 "params": { 00:32:43.921 "enable": true, 00:32:43.921 "name": "nvme0n1" 00:32:43.921 } 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "method": "bdev_wait_for_examine" 00:32:43.921 } 00:32:43.921 ] 00:32:43.921 }, 00:32:43.921 { 00:32:43.921 "subsystem": "nbd", 00:32:43.921 "config": [] 00:32:43.921 } 00:32:43.921 ] 00:32:43.921 }' 00:32:43.921 [2024-12-05 11:16:08.536066] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:32:43.921 [2024-12-05 11:16:08.536169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84830 ] 00:32:44.178 [2024-12-05 11:16:08.698984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.178 [2024-12-05 11:16:08.755001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.435 [2024-12-05 11:16:08.921830] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:45.003 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.003 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:32:45.003 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:45.003 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:32:45.260 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.260 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:45.517 Running I/O for 1 seconds... 00:32:46.453 5272.00 IOPS, 20.59 MiB/s 00:32:46.453 Latency(us) 00:32:46.453 [2024-12-05T11:16:11.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.453 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:46.453 Verification LBA range: start 0x0 length 0x2000 00:32:46.453 nvme0n1 : 1.01 5328.95 20.82 0.00 0.00 23839.24 4681.14 19099.06 00:32:46.453 [2024-12-05T11:16:11.105Z] =================================================================================================================== 00:32:46.453 [2024-12-05T11:16:11.105Z] Total : 5328.95 20.82 0.00 0.00 23839.24 4681.14 19099.06 00:32:46.453 { 00:32:46.453 "results": [ 00:32:46.453 { 00:32:46.453 "job": "nvme0n1", 00:32:46.453 "core_mask": "0x2", 00:32:46.453 "workload": "verify", 00:32:46.453 "status": "finished", 00:32:46.453 "verify_range": { 00:32:46.453 "start": 0, 00:32:46.453 "length": 8192 00:32:46.453 }, 00:32:46.453 "queue_depth": 128, 00:32:46.453 "io_size": 4096, 00:32:46.453 "runtime": 1.013332, 00:32:46.453 "iops": 5328.954380203132, 00:32:46.453 "mibps": 20.816228047668485, 00:32:46.453 "io_failed": 0, 00:32:46.453 "io_timeout": 0, 00:32:46.453 "avg_latency_us": 23839.240126984125, 00:32:46.453 "min_latency_us": 4681.142857142857, 00:32:46.453 "max_latency_us": 19099.062857142857 00:32:46.453 } 00:32:46.453 ], 00:32:46.453 "core_count": 1 00:32:46.453 } 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:46.453 
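(Annotator note: the JSON block above is the machine-readable mirror of the latency table. For example, assuming the summary were captured to a results.json file, the headline numbers pull back out with jq:)

  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json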
11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:46.453 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:46.453 nvmf_trace.0 00:32:46.453 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:32:46.453 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84830 00:32:46.453 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84830 ']' 00:32:46.453 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84830 00:32:46.453 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:46.453 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.453 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84830 00:32:46.711 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:46.711 killing process with pid 84830 00:32:46.711 Received shutdown signal, test time was about 1.000000 seconds 00:32:46.711 00:32:46.711 Latency(us) 00:32:46.711 [2024-12-05T11:16:11.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.711 [2024-12-05T11:16:11.363Z] =================================================================================================================== 00:32:46.711 [2024-12-05T11:16:11.363Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.711 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:46.711 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84830' 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84830 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84830 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:46.712 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:46.970 rmmod nvme_tcp 00:32:46.970 rmmod nvme_fabrics 00:32:46.970 rmmod nvme_keyring 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set 
-e 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 84786 ']' 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 84786 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84786 ']' 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84786 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84786 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.970 killing process with pid 84786 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84786' 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84786 00:32:46.970 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84786 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # 
local dev=initiator0 in_ns= 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:32:47.228 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Br65ye2EuR /tmp/tmp.37RfhDCClp /tmp/tmp.z78ooGyyDB 00:32:47.486 00:32:47.486 real 1m26.573s 00:32:47.486 user 2m15.463s 00:32:47.486 sys 0m30.788s 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:32:47.486 ************************************ 00:32:47.486 END TEST nvmf_tls 00:32:47.486 ************************************ 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.486 
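(Annotator note: the nvmf_fini teardown traced above, condensed to its underlying commands; a sketch, not the nvmf/setup.sh source:)

  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring     # unload initiator modules
  ip netns delete nvmf_ns_spdk 2> /dev/null             # _remove_target_ns
  ip link delete nvmf_br 2> /dev/null                   # drop the test bridge
  ip link delete initiator0 2> /dev/null                # veth initiator ends
  ip link delete initiator1 2> /dev/null
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip the test rules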
11:16:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:32:47.486 ************************************ 00:32:47.486 START TEST nvmf_fips 00:32:47.486 ************************************ 00:32:47.486 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:32:47.486 * Looking for test storage... 00:32:47.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:32:47.486 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:47.486 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:32:47.486 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.746 --rc genhtml_branch_coverage=1 00:32:47.746 --rc genhtml_function_coverage=1 00:32:47.746 --rc genhtml_legend=1 00:32:47.746 --rc geninfo_all_blocks=1 00:32:47.746 --rc geninfo_unexecuted_blocks=1 00:32:47.746 00:32:47.746 ' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.746 --rc genhtml_branch_coverage=1 00:32:47.746 --rc genhtml_function_coverage=1 00:32:47.746 --rc genhtml_legend=1 00:32:47.746 --rc geninfo_all_blocks=1 00:32:47.746 --rc geninfo_unexecuted_blocks=1 00:32:47.746 00:32:47.746 ' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.746 --rc genhtml_branch_coverage=1 00:32:47.746 --rc genhtml_function_coverage=1 00:32:47.746 --rc genhtml_legend=1 00:32:47.746 --rc geninfo_all_blocks=1 00:32:47.746 --rc geninfo_unexecuted_blocks=1 00:32:47.746 00:32:47.746 ' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:47.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.746 --rc genhtml_branch_coverage=1 00:32:47.746 --rc genhtml_function_coverage=1 00:32:47.746 --rc genhtml_legend=1 00:32:47.746 --rc geninfo_all_blocks=1 00:32:47.746 --rc geninfo_unexecuted_blocks=1 00:32:47.746 00:32:47.746 ' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
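(Annotator note: the ver1/ver2 walkthrough a few lines up is scripts/common.sh's field-by-field dotted-version compare. The same verdict in miniature, using sort -V instead of the per-field loop; a sketch, not the library code:)

  ge() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; }  # is $1 >= $2?
  ge 1.15 2 || echo "1.15 < 2, keep the pre-2.0 lcov flags"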
00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.746 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:47.747 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@86 -- # openssl version 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:32:47.747 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:32:47.748 Error setting digest 00:32:47.748 4012BFA95D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:32:47.748 4012BFA95D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:47.748 
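(Annotator note: the NOT/es dance above is inverted logic on purpose. With OPENSSL_CONF pointing at the generated spdk_fips.conf, a non-approved digest such as MD5 must be rejected, so the failing `openssl md5` is the passing outcome. The check in miniature, as a sketch:)

  if ! echo test | OPENSSL_CONF=spdk_fips.conf openssl md5 &> /dev/null; then
      echo "MD5 rejected - FIPS provider is enforcing"
  fi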
11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@280 -- # nvmf_veth_init 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@223 -- # create_target_ns 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:47.748 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # create_main_bridge 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@105 -- # delete_main_bridge 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:32:48.006 11:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@152 -- # set_up initiator0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0 up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:32:48.006 11:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:32:48.006 10.0.0.1 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:48.006 10.0.0.2 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:32:48.006 
11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target0_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:48.006 11:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator1 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target1 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1 up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target1_br 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target1 00:32:48.006 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:32:48.006 
11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772163 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:32:48.265 10.0.0.3 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772164 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:32:48.265 10.0.0.4 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator1 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target1_br 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:32:48.265 
11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 2 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:48.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:48.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:32:48.265 00:32:48.265 --- 10.0.0.1 ping statistics --- 00:32:48.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.265 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.265 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:48.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:48.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:32:48.266 00:32:48.266 --- 10.0.0.2 ping statistics --- 00:32:48.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.266 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:32:48.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:32:48.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:32:48.266 00:32:48.266 --- 10.0.0.3 ping statistics --- 00:32:48.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.266 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:32:48.266 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:32:48.266 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.124 ms 00:32:48.266 00:32:48.266 --- 10.0.0.4 ping statistics --- 00:32:48.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.266 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # return 0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:48.266 11:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:48.266 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:32:48.267 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:48.524 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:32:48.525 ' 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=85163 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 85163 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85163 ']' 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.525 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:48.525 [2024-12-05 11:16:13.068856] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:48.525 [2024-12-05 11:16:13.068951] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.782 [2024-12-05 11:16:13.228842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.782 [2024-12-05 11:16:13.291987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.782 [2024-12-05 11:16:13.292064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.782 [2024-12-05 11:16:13.292084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.782 [2024-12-05 11:16:13.292100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.782 [2024-12-05 11:16:13.292116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
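The xtrace above is nvmf/setup.sh building two veth initiator/target pairs and verifying them. Condensed into the equivalent manual sequence for one pair, this is a sketch reconstructed from the trace: the nvmf_ns_spdk namespace and nvmf_br bridge are created earlier in the log and assumed here, and the second pair repeats the same steps with 10.0.0.3/10.0.0.4. Note how the set_ip helper turns a pooled integer into a dotted quad, e.g. 167772161 = 0x0A000001 = 10.0.0.1:

  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk                 # target end lives in its own netns
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias  # setup.sh stashes each IP in ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip link set initiator0 up; ip link set initiator0_br up; ip link set target0_br up
  ip netns exec nvmf_ns_spdk ip link set target0 up
  ip link set initiator0_br master nvmf_br               # both *_br peers join the bridge
  ip link set target0_br master nvmf_br
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1          # cross-namespace reachability check
  ping -c 1 10.0.0.2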
00:32:48.782 [2024-12-05 11:16:13.292495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.1Wu 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.1Wu 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.1Wu 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.1Wu 00:32:49.714 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:49.714 [2024-12-05 11:16:14.302998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.714 [2024-12-05 11:16:14.318932] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:49.714 [2024-12-05 11:16:14.319118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.714 malloc0 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85224 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85224 /var/tmp/bdevperf.sock 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85224 ']' 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
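Before bdevperf is polled, fips.sh has already provisioned the target side, as the trace above shows: the interchange-format TLS PSK is written to a mode-0600 temp file (/tmp/spdk-psk.1Wu in this run) and a TLS listener is opened on 10.0.0.2:4420, which is when tcp.c prints its "TLS support is considered experimental" notice. The exact RPC arguments are elided in the trace, so the sketch below assumes the standard SPDK RPC names (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_host with --psk, nvmf_subsystem_add_listener with --secure-channel); rpc.py is scripts/rpc.py talking to the nvmf_tgt started above:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)   # /tmp/spdk-psk.1Wu here
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"               # the PSK file must not be group/world readable
  # assumed RPC sequence, not taken verbatim from the trace:
  rpc.py nvmf_create_transport -t tcp
  rpc.py bdev_malloc_create -b malloc0 32 4096
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel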
00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.971 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:49.971 [2024-12-05 11:16:14.437705] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:32:49.971 [2024-12-05 11:16:14.437942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85224 ] 00:32:49.971 [2024-12-05 11:16:14.587103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.228 [2024-12-05 11:16:14.645813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.228 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.228 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:32:50.228 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1Wu 00:32:50.793 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:32:50.793 [2024-12-05 11:16:15.345182] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:50.793 TLSTESTn1 00:32:50.793 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:51.050 Running I/O for 10 seconds... 
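On the initiator side the test then drives everything over bdevperf's RPC socket. The three calls below are taken verbatim from the xtrace above (fips.sh@151, @152 and @156), consolidated for readability: the TLS handshake happens at attach time, when bdev_nvme_rpc.c logs its own "experimental" notice, and the attached controller surfaces as the TLSTESTn1 bdev that the ten-second verify workload below exercises:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1Wu
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # -q 128 -o 4096 -w verify -t 10, set at launch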
00:32:52.953 4779.00 IOPS, 18.67 MiB/s [2024-12-05T11:16:18.551Z] 4664.50 IOPS, 18.22 MiB/s [2024-12-05T11:16:19.931Z] 4795.33 IOPS, 18.73 MiB/s [2024-12-05T11:16:20.868Z] 4894.00 IOPS, 19.12 MiB/s [2024-12-05T11:16:21.806Z] 4955.00 IOPS, 19.36 MiB/s [2024-12-05T11:16:22.742Z] 4999.33 IOPS, 19.53 MiB/s [2024-12-05T11:16:23.755Z] 5038.14 IOPS, 19.68 MiB/s [2024-12-05T11:16:24.690Z] 5065.75 IOPS, 19.79 MiB/s [2024-12-05T11:16:25.624Z] 5078.22 IOPS, 19.84 MiB/s [2024-12-05T11:16:25.624Z] 5102.60 IOPS, 19.93 MiB/s
00:33:00.972 Latency(us)
00:33:00.972 [2024-12-05T11:16:25.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:00.972 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:00.972 Verification LBA range: start 0x0 length 0x2000
00:33:00.972 TLSTESTn1 : 10.01 5108.97 19.96 0.00 0.00 25016.15 4213.03 24341.94
00:33:00.972 [2024-12-05T11:16:25.624Z] ===================================================================================================================
00:33:00.972 [2024-12-05T11:16:25.624Z] Total : 5108.97 19.96 0.00 0.00 25016.15 4213.03 24341.94
00:33:00.972 {
00:33:00.972   "results": [
00:33:00.972     {
00:33:00.972       "job": "TLSTESTn1",
00:33:00.972       "core_mask": "0x4",
00:33:00.972       "workload": "verify",
00:33:00.972       "status": "finished",
00:33:00.972       "verify_range": {
00:33:00.972         "start": 0,
00:33:00.972         "length": 8192
00:33:00.972       },
00:33:00.972       "queue_depth": 128,
00:33:00.972       "io_size": 4096,
00:33:00.972       "runtime": 10.011995,
00:33:00.972       "iops": 5108.971788339886,
00:33:00.972       "mibps": 19.95692104820268,
00:33:00.972       "io_failed": 0,
00:33:00.972       "io_timeout": 0,
00:33:00.972       "avg_latency_us": 25016.153659854903,
00:33:00.972       "min_latency_us": 4213.028571428571,
00:33:00.972       "max_latency_us": 24341.942857142858
00:33:00.972     }
00:33:00.972   ],
00:33:00.972   "core_count": 1
00:33:00.972 }
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:33:00.972 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:33:00.972 nvmf_trace.0
00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85224
00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85224 ']'
00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0
85224 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85224 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:01.230 killing process with pid 85224 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85224' 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85224 00:33:01.230 Received shutdown signal, test time was about 10.000000 seconds 00:33:01.230 00:33:01.230 Latency(us) 00:33:01.230 [2024-12-05T11:16:25.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.230 [2024-12-05T11:16:25.882Z] =================================================================================================================== 00:33:01.230 [2024-12-05T11:16:25.882Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:01.230 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85224 00:33:01.489 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:33:01.489 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:01.489 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:01.489 rmmod nvme_tcp 00:33:01.489 rmmod nvme_fabrics 00:33:01.489 rmmod nvme_keyring 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 85163 ']' 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 85163 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85163 ']' 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85163 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.489 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85163 00:33:01.747 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:01.747 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:33:01.747 killing process with pid 85163 00:33:01.747 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85163' 00:33:01.747 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85163 00:33:01.747 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85163 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.1Wu 00:33:02.006 00:33:02.006 real 0m14.613s 00:33:02.006 user 0m18.280s 00:33:02.006 sys 0m6.890s 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:33:02.006 ************************************ 00:33:02.006 END TEST nvmf_fips 00:33:02.006 ************************************ 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.006 11:16:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:02.265 ************************************ 00:33:02.265 START TEST nvmf_control_msg_list 00:33:02.265 ************************************ 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:33:02.265 * Looking for test storage... 
00:33:02.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.265 --rc genhtml_branch_coverage=1 00:33:02.265 --rc genhtml_function_coverage=1 00:33:02.265 --rc genhtml_legend=1 00:33:02.265 --rc geninfo_all_blocks=1 00:33:02.265 --rc geninfo_unexecuted_blocks=1 00:33:02.265 00:33:02.265 ' 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.265 --rc genhtml_branch_coverage=1 00:33:02.265 --rc genhtml_function_coverage=1 00:33:02.265 --rc genhtml_legend=1 00:33:02.265 --rc geninfo_all_blocks=1 00:33:02.265 --rc geninfo_unexecuted_blocks=1 00:33:02.265 00:33:02.265 ' 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.265 --rc genhtml_branch_coverage=1 00:33:02.265 --rc genhtml_function_coverage=1 00:33:02.265 --rc genhtml_legend=1 00:33:02.265 --rc geninfo_all_blocks=1 00:33:02.265 --rc geninfo_unexecuted_blocks=1 00:33:02.265 00:33:02.265 ' 00:33:02.265 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:02.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.266 --rc genhtml_branch_coverage=1 00:33:02.266 --rc genhtml_function_coverage=1 00:33:02.266 --rc genhtml_legend=1 00:33:02.266 --rc geninfo_all_blocks=1 00:33:02.266 --rc geninfo_unexecuted_blocks=1 00:33:02.266 00:33:02.266 ' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
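[Note: the `lt 1.15 2` check traced above is cmp_versions from scripts/common.sh splitting each version on `.`, `-`, or `:` and comparing numerically field by field. A minimal sketch of just the less-than path exercised here; the helper name cmp_lt and the zero-defaulting of missing fields are assumptions:

    cmp_lt() {                      # returns 0 if $1 < $2, 1 otherwise
        local IFS='.-:' v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                    # equal is not less-than
    }
    cmp_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2
]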
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:02.266 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
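[Note: the non-fatal "integer expression expected" complaint from common.sh line 31 above is the classic empty-string-in-a-numeric-test pattern: `[ '' -eq 1 ]` has nothing to coerce to an integer, so `[` exits with status 2 and the branch is simply not taken. A sketch of the failure and the usual guard; FLAG is a hypothetical variable name:

    [ '' -eq 1 ]              # fails: integer expression expected, exit status 2
    [ "${FLAG:-0}" -eq 1 ]    # safe: an unset or empty FLAG defaults to 0
]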
00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@223 -- # create_target_ns 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # 
eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:02.266 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:02.267 11:16:26 
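[Note: condensed, the create_target_ns/create_main_bridge steps just traced boil down to four link commands plus one tagged forwarding rule; the commands below are lifted directly from the log:

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
]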
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:02.267 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target0 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set 
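[Note: each endpoint in a pair is a veth device whose peer (`*_br`) is later enslaved to nvmf_br; the loop below is an equivalent condensation of the create_veth/set_up calls traced above:

    for dev in initiator0 target0; do
        ip link add "$dev" type veth peer name "${dev}_br"
        ip link set "$dev" up
        ip link set "${dev}_br" up
    done
]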
target0_br up 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:02.525 10.0.0.1 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
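[Note: val_to_ip turns the packed pool value into dotted-quad form (167772161 is 0x0A000001, i.e. 10.0.0.1). The trace only shows the final printf with the octets already split, so the shift-and-mask extraction below is an assumption about how they are derived:

    val_to_ip() {
        local val=$1    # packed 32-bit IPv4 address
        printf '%u.%u.%u.%u\n' \
            $(( val >> 24 & 255 )) $(( val >> 16 & 255 )) \
            $(( val >>  8 & 255 )) $(( val        & 255 ))
    }
    val_to_ip 167772161    # -> 10.0.0.1
]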
/sys/class/net/target0/ifalias' 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:02.525 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:02.525 10.0.0.2 00:33:02.525 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:02.525 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:02.525 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.525 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:02.525 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:02.525 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:02.526 11:16:27 
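[Note: besides assigning the address, set_ip stashes it in the interface's sysfs ifalias, which is why the later get_ip_address calls can simply cat that file instead of parsing `ip addr` output; commands as in the trace:

    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias    # record the IP
    cat /sys/class/net/initiator0/ifalias                    # -> 10.0.0.1
]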
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:02.526 11:16:27 
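[Note: the INPUT rule opening TCP port 4420 goes through the same ipts wrapper as the FORWARD rule, tagging every rule with an SPDK_NVMF comment so teardown can find them. A plausible sketch of the wrapper, consistent with the comment strings in the trace:

    ipts() {
        # Append the original arguments as a comment so that
        # 'iptables-save | grep -v SPDK_NVMF' can strip these rules later.
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
]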
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target1 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772163 00:33:02.526 11:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:02.526 10.0.0.3 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772164 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:02.526 10.0.0.4 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:02.526 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:02.527 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.527 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.527 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:02.527 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:02.787 11:16:27 
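[Note: dev_map is the global associative array that setup_interfaces fills per pair and that nvmf_fini walked at the top of this section. The sketch below shows the bookkeeping and why teardown skipped target0/target1: they moved into the namespace, so their host-side sysfs entry is gone and the loop continues past them.

    declare -gA dev_map                # cf. setup.sh@27: local -gA dev_map
    id=1                               # pair index, as in setup_interface_pair
    dev_map["initiator$id"]=initiator$id
    dev_map["target$id"]=target$id
    for dev in "${dev_map[@]}"; do     # teardown path
        [[ -e /sys/class/net/$dev/address ]] || continue   # targets: gone with the ns
        ip link delete "$dev"
    done
]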
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:02.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:02.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:33:02.787 00:33:02.787 --- 10.0.0.1 ping statistics --- 00:33:02.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.787 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target0 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:02.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:02.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:33:02.787 00:33:02.787 --- 10.0.0.2 ping statistics --- 00:33:02.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.787 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.787 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:02.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:02.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:33:02.788 00:33:02.788 --- 10.0.0.3 ping statistics --- 00:33:02.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.788 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:02.788 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:02.788 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:33:02.788 00:33:02.788 --- 10.0.0.4 ping statistics --- 00:33:02.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.788 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # return 0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:02.788 
11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target0 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:02.788 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:02.789 ' 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:02.789 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:02.789 11:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=85614 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 85614 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85614 ']' 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.047 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:03.048 [2024-12-05 11:16:27.494474] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:03.048 [2024-12-05 11:16:27.494553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.048 [2024-12-05 11:16:27.636825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.307 [2024-12-05 11:16:27.714247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.307 [2024-12-05 11:16:27.714309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.307 [2024-12-05 11:16:27.714320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.307 [2024-12-05 11:16:27.714329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.307 [2024-12-05 11:16:27.714337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
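nvmfappstart, entered above, backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers; the EAL and tracepoint notices in between are the application's own startup output. A condensed sketch of that start-and-wait pattern (the polling loop is simplified relative to what autotest_common.sh actually does):

    # Launch the target in the namespace and wait for its RPC socket (sketch).
    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    for ((i = 0; i < 100; i++)); do
        # rpc.py fails until the app is listening on /var/tmp/spdk.sock
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done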
00:33:03.307 [2024-12-05 11:16:27.714734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.873 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:03.873 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:33:03.873 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:03.873 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:03.874 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:04.132 [2024-12-05 11:16:28.551713] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:04.132 Malloc0 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:04.132 [2024-12-05 11:16:28.599346] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85664 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85665 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85666 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:04.132 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85664 00:33:04.390 [2024-12-05 11:16:28.793745] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:04.390 [2024-12-05 11:16:28.804100] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:04.390 [2024-12-05 11:16:28.804297] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:05.324 Initializing NVMe Controllers 00:33:05.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:33:05.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:33:05.324 Initialization complete. Launching workers. 
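Everything the test needs is visible in the rpc_cmd calls above: a TCP transport constrained to a single control message buffer (--control-msg-num 1) with 768-byte in-capsule data, one subsystem backed by a 32 MiB malloc bdev, a listener on 10.0.0.2:4420, and three spdk_nvme_perf instances pinned to different cores so they contend for that one buffer. The equivalent direct invocations, condensed (rpc_cmd is a thin wrapper around scripts/rpc.py):

    # Control-message starvation setup, condensed from the trace (sketch).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Three single-queue readers on cores 1-3, all against the same listener.
    for mask in 0x2 0x4 0x8; do
        ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait

The three per-core latency tables that follow show the outcome: all three initiators finish their one-second runs at comparable rates despite sharing a single control message buffer.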
00:33:05.324 ========================================================
00:33:05.324 Latency(us)
00:33:05.324 Device Information : IOPS MiB/s Average min max
00:33:05.324 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4435.95 17.33 225.10 96.48 641.47
00:33:05.324 ========================================================
00:33:05.324 Total : 4435.95 17.33 225.10 96.48 641.47
00:33:05.324 
00:33:05.324 Initializing NVMe Controllers
00:33:05.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:33:05.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:33:05.324 Initialization complete. Launching workers.
00:33:05.324 ========================================================
00:33:05.324 Latency(us)
00:33:05.324 Device Information : IOPS MiB/s Average min max
00:33:05.324 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4399.00 17.18 227.05 122.15 535.73
00:33:05.324 ========================================================
00:33:05.324 Total : 4399.00 17.18 227.05 122.15 535.73
00:33:05.324 
00:33:05.324 Initializing NVMe Controllers
00:33:05.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:33:05.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:33:05.324 Initialization complete. Launching workers.
00:33:05.324 ========================================================
00:33:05.324 Latency(us)
00:33:05.324 Device Information : IOPS MiB/s Average min max
00:33:05.324 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4405.00 17.21 226.75 150.26 594.48
00:33:05.324 ========================================================
00:33:05.324 Total : 4405.00 17.21 226.75 150.26 594.48
00:33:05.324 
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85665
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85666
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20}
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:33:05.324 rmmod nvme_tcp
00:33:05.324 rmmod nvme_fabrics
00:33:05.324 rmmod nvme_keyring
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0
00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 85614
']' 00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 85614 00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85614 ']' 00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85614 00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.324 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85614 00:33:05.583 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.583 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.583 killing process with pid 85614 00:33:05.583 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85614' 00:33:05.583 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85614 00:33:05.583 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 85614 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:05.841 11:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-save 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-restore 00:33:05.841 00:33:05.841 real 0m3.826s 00:33:05.841 user 0m5.285s 00:33:05.841 sys 0m2.030s 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.841 ************************************ 00:33:05.841 END TEST nvmf_control_msg_list 00:33:05.841 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:33:05.841 ************************************ 
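nvmf_fini, traced above, tears down in roughly the reverse order of setup. The target namespace was already removed (remove_target_ns, output suppressed), which deleted target0 and target1 with it; that is why those two entries hit `continue` while the bridge and the initiator-side veths need explicit deletion. Firewall state is restored by filtering this run's tagged rules out of a save/restore cycle. Condensed (sketch):

    # Teardown, condensed from the trace (sketch).
    ip netns delete nvmf_ns_spdk     # takes target0/target1 with it
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1

    # Drop only the rules tagged SPDK_NVMF, keep everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore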
00:33:06.099 11:16:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:33:06.099 11:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:06.099 11:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:06.100 ************************************ 00:33:06.100 START TEST nvmf_wait_for_buf 00:33:06.100 ************************************ 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:33:06.100 * Looking for test storage... 00:33:06.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:06.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.100 --rc genhtml_branch_coverage=1 00:33:06.100 --rc genhtml_function_coverage=1 00:33:06.100 --rc genhtml_legend=1 00:33:06.100 --rc geninfo_all_blocks=1 00:33:06.100 --rc geninfo_unexecuted_blocks=1 00:33:06.100 00:33:06.100 ' 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:06.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.100 --rc genhtml_branch_coverage=1 00:33:06.100 --rc genhtml_function_coverage=1 00:33:06.100 --rc genhtml_legend=1 00:33:06.100 --rc geninfo_all_blocks=1 00:33:06.100 --rc geninfo_unexecuted_blocks=1 00:33:06.100 00:33:06.100 ' 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:06.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.100 --rc genhtml_branch_coverage=1 00:33:06.100 --rc genhtml_function_coverage=1 00:33:06.100 --rc genhtml_legend=1 00:33:06.100 --rc geninfo_all_blocks=1 00:33:06.100 --rc geninfo_unexecuted_blocks=1 00:33:06.100 00:33:06.100 ' 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:06.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.100 --rc genhtml_branch_coverage=1 00:33:06.100 --rc genhtml_function_coverage=1 00:33:06.100 --rc genhtml_legend=1 00:33:06.100 --rc geninfo_all_blocks=1 00:33:06.100 --rc geninfo_unexecuted_blocks=1 00:33:06.100 00:33:06.100 ' 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:06.100 11:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.100 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@50 -- # : 0 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.360 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:06.361 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:06.361 11:16:30 
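The `[: : integer expression expected` line above is a real, if harmless, script error: common.sh line 31 applies an arithmetic test to a variable that expanded empty, and `[` rejects a non-integer operand for -eq. A stand-in reproduction (the variable actually tested at line 31 is not visible in the trace, so `flag` here is hypothetical):

    flag=''                               # stand-in for the unset variable
    [ "$flag" -eq 1 ] && echo set         # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo set    # defaulting keeps the operand numeric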
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@223 -- # create_target_ns 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 
-- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:06.361 11:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target0 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local 
dev=target0 ns=nvmf_ns_spdk 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:06.361 10.0.0.1 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:06.361 10.0.0.2 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:06.361 
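The 167772161 and 167772162 threaded through set_ip above are just 10.0.0.1 and 10.0.0.2 as 32-bit integers (0x0a000001 and 0x0a000002); the pool hands out consecutive integers and val_to_ip renders them dotted-quad. A sketch of that conversion (the byte extraction is inferred from the printf arguments visible in the trace):

    # 32-bit pool value -> dotted quad, as val_to_ip does (sketch).
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) $((val & 0xff))
    }

    val_to_ip 167772161   # 10.0.0.1
    val_to_ip 167772162   # 10.0.0.2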
11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:06.361 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:06.362 11:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:06.362 11:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target1 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:06.362 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772163 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # 
tee /sys/class/net/initiator1/ifalias 00:33:06.622 10.0.0.3 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772164 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:06.622 10.0.0.4 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator1_br 
bridge=nvmf_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:06.622 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:06.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:33:06.622 00:33:06.622 --- 10.0.0.1 ping statistics --- 00:33:06.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.623 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:06.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:06.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:33:06.623 00:33:06.623 --- 10.0.0.2 ping statistics --- 00:33:06.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.623 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:06.623 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:06.623 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:33:06.623 00:33:06.623 --- 10.0.0.3 ping statistics --- 00:33:06.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.623 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:06.623 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:06.623 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:33:06.623 00:33:06.623 --- 10.0.0.4 ping statistics --- 00:33:06.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.623 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # return 0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.623 
11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:06.623 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:06.624 ' 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:06.624 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 
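Everything from create_veth up to the modprobe above is nvmf/setup.sh assembling the virtual test network: two initiator/target veth pairs whose addresses come from an integer pool (167772161 is 0x0A000001, i.e. 10.0.0.1), with each target end moved into the nvmf_ns_spdk namespace, both *_br peers enslaved to the nvmf_br bridge, TCP port 4420 opened per initiator, and every address ping-verified across the namespace boundary. A condensed sketch of one pair, assuming the device and namespace names seen in the trace (the octet arithmetic in val_to_ip is a reconstruction; the script itself hands printf precomputed octets):

# Sketch of one setup_interface_pair pass (pair 0); not the verbatim script.
val_to_ip() {  # 167772161 -> 10.0.0.1 (octet extraction reconstructed)
    printf '%u.%u.%u.%u\n' $(($1 >> 24 & 255)) $(($1 >> 16 & 255)) \
                           $(($1 >> 8 & 255)) $(($1 & 255))
}

ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk               # only the target end moves

ip addr add "$(val_to_ip 167772161)/24" dev initiator0           # 10.0.0.1
ip netns exec nvmf_ns_spdk \
    ip addr add "$(val_to_ip 167772162)/24" dev target0          # 10.0.0.2
echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias  # get_ip_address reads this back

ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br master nvmf_br && ip link set target0_br up

iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 && ping -c 1 10.0.0.2  # both directions

Pair 1 repeats the same steps with initiator1/target1 and the next two pool addresses (10.0.0.3 and 10.0.0.4), after which the legacy NVMF_*_IP variables are derived by reading the ifalias files back, as the trace shows just above.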
00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=85905 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 85905 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85905 ']' 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.883 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:06.883 [2024-12-05 11:16:31.367455] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:06.883 [2024-12-05 11:16:31.367562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.883 [2024-12-05 11:16:31.518949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.144 [2024-12-05 11:16:31.585168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.144 [2024-12-05 11:16:31.585222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.144 [2024-12-05 11:16:31.585233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.144 [2024-12-05 11:16:31.585242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.144 [2024-12-05 11:16:31.585249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
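With the network in place, nvmfappstart launches the target inside the namespace and parks it with --wait-for-rpc so the iobuf pools can be shrunk before framework init. The waitforlisten lines above simply poll the RPC socket until the app answers; a hedged sketch of that launch-and-wait pattern (the polling loop approximates autotest_common.sh's waitforlisten rather than quoting it):

# Start nvmf_tgt in the netns and block until its RPC socket answers.
ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do     # retry budget mirrors max_retries=100 above
    # rpc_get_methods is a cheap call that succeeds once the app is listening
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null && break
    sleep 0.5
done
kill -0 "$nvmfpid"                  # fail fast if the app died instead of listening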
00:33:07.144 [2024-12-05 11:16:31.585625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.144 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.402 Malloc0 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:33:07.402 [2024-12-05 11:16:31.849086] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:07.402 [2024-12-05 11:16:31.873234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.402 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:07.660 [2024-12-05 11:16:32.071720] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:33:09.032 Initializing NVMe Controllers 00:33:09.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:33:09.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:33:09.032 Initialization complete. Launching workers. 
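Condensed, the RPC sequence above deliberately starves the small iobuf pool (154 buffers of 8 KiB against a TCP transport asking for 24 buffers per queue) before finishing framework init, so the 128 KiB perf reads are forced through the buffer-wait path this test exists to exercise. Restated as plain rpc.py calls, on the assumption that rpc_cmd is the usual thin wrapper over rpc.py and /var/tmp/spdk.sock:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # tiny on purpose
$rpc framework_start_init                   # finish the init deferred by --wait-for-rpc
$rpc bdev_malloc_create -b Malloc0 32 512   # 32 MiB RAM disk, 512 B blocks
$rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The perf run then connects from the root namespace through the bridge, and the verdict comes from iobuf_get_stats below: the small-pool retry counter (2022 here) only has to be non-zero for the test to pass.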
00:33:09.032 ======================================================== 00:33:09.032 Latency(us) 00:33:09.032 Device Information : IOPS MiB/s Average min max 00:33:09.032 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.50 15.94 32496.45 7987.33 63992.09 00:33:09.032 ======================================================== 00:33:09.032 Total : 127.50 15.94 32496.45 7987.33 63992.09 00:33:09.032 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:09.032 rmmod nvme_tcp 00:33:09.032 rmmod nvme_fabrics 00:33:09.032 rmmod nvme_keyring 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 85905 ']' 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 85905 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85905 ']' 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85905 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85905 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:33:09.032 killing process with pid 85905 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85905' 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85905 00:33:09.032 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85905 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:09.290 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:09.548 11:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:33:09.548 00:33:09.548 real 0m3.533s 00:33:09.548 user 0m2.786s 00:33:09.548 sys 0m1.060s 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:33:09.548 ************************************ 00:33:09.548 END TEST nvmf_wait_for_buf 00:33:09.548 ************************************ 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:09.548 ************************************ 00:33:09.548 START TEST nvmf_nsid 00:33:09.548 ************************************ 00:33:09.548 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:33:09.807 * Looking for test storage... 
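Teardown (nvmf_fini, traced above just before the timing summary) runs in roughly the reverse order of setup: the target namespace goes first, taking target0/target1 with it (hence the continue branches once their /sys/class/net entries are already gone), then the bridge and the root-namespace veths are deleted, and iptr strips every rule tagged SPDK_NVMF by round-tripping the ruleset. A sketch of that order, assuming the names from the trace (the namespace-removal helper is summarized, not quoted):

ip netns delete nvmf_ns_spdk          # drags target0/target1 down with it
ip link delete nvmf_br
for dev in initiator0 initiator1; do
    [[ -e /sys/class/net/$dev/address ]] && ip link delete "$dev"
done
# iptr: this is why setup tagged each rule with -m comment 'SPDK_NVMF:...'
iptables-save | grep -v SPDK_NVMF | iptables-restore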
00:33:09.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:09.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.807 --rc genhtml_branch_coverage=1 00:33:09.807 --rc genhtml_function_coverage=1 00:33:09.807 --rc genhtml_legend=1 00:33:09.807 --rc geninfo_all_blocks=1 00:33:09.807 --rc geninfo_unexecuted_blocks=1 00:33:09.807 00:33:09.807 ' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:09.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.807 --rc genhtml_branch_coverage=1 00:33:09.807 --rc genhtml_function_coverage=1 00:33:09.807 --rc genhtml_legend=1 00:33:09.807 --rc geninfo_all_blocks=1 00:33:09.807 --rc geninfo_unexecuted_blocks=1 00:33:09.807 00:33:09.807 ' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:09.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.807 --rc genhtml_branch_coverage=1 00:33:09.807 --rc genhtml_function_coverage=1 00:33:09.807 --rc genhtml_legend=1 00:33:09.807 --rc geninfo_all_blocks=1 00:33:09.807 --rc geninfo_unexecuted_blocks=1 00:33:09.807 00:33:09.807 ' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:09.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.807 --rc genhtml_branch_coverage=1 00:33:09.807 --rc genhtml_function_coverage=1 00:33:09.807 --rc genhtml_legend=1 00:33:09.807 --rc geninfo_all_blocks=1 00:33:09.807 --rc geninfo_unexecuted_blocks=1 00:33:09.807 00:33:09.807 ' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
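
The 'lt 1.15 2' trace above is scripts/common.sh checking the detected lcov version field by field: both strings are split on '.', '-', and ':' into arrays, and the first unequal numeric field decides the comparison. A simplified, self-contained sketch of that comparison style (the traced cmp_versions also validates each field through its decimal helper, which this sketch assumes away by requiring numeric fields):

    #!/usr/bin/env bash
    # Return success if version $1 is strictly less than $2.
    version_lt() {
            local IFS=.-:              # split fields like cmp_versions does
            local -a a=($1) b=($2)
            local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
            for (( i = 0; i < n; i++ )); do
                    local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
                    (( 10#$x < 10#$y )) && return 0   # force base 10
                    (( 10#$x > 10#$y )) && return 1
            done
            return 1   # versions are equal
    }

    version_lt 1.15 2   && echo "1.15 < 2"     # prints: field 1 < field 2
    version_lt 2.1 2.1  || echo "2.1 == 2.1"   # prints: all fields equal
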
00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.807 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:09.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:33:09.808 11:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@223 -- # create_target_ns 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@121 -- # return 0 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:09.808 
11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target0 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:09.808 11:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:09.808 10.0.0.1 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:09.808 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:10.067 10.0.0.2 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec 
nvmf_ns_spdk ip link set target0 up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # 
ips=("$ip" $((++ip))) 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target1 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ 
tcp == tcp ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772163 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:10.067 10.0.0.3 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772164 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:10.067 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:10.068 10.0.0.4 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:10.068 11:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 
-p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:10.068 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:10.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:10.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:33:10.327 00:33:10.327 --- 10.0.0.1 ping statistics --- 00:33:10.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.327 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:10.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:10.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:33:10.327 00:33:10.327 --- 10.0.0.2 ping statistics --- 00:33:10.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.327 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:10.327 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:10.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:10.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:33:10.327 00:33:10.327 --- 10.0.0.3 ping statistics --- 00:33:10.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.328 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:10.328 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:10.328 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.121 ms 00:33:10.328 00:33:10.328 --- 10.0.0.4 ping statistics --- 00:33:10.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.328 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # return 0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:10.328 11:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:10.328 ' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=86180 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 86180 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86180 ']' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.328 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:33:10.598 [2024-12-05 11:16:34.984765] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:10.599 [2024-12-05 11:16:34.984898] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.599 [2024-12-05 11:16:35.152565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.599 [2024-12-05 11:16:35.241199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.599 [2024-12-05 11:16:35.241266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.599 [2024-12-05 11:16:35.241281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.599 [2024-12-05 11:16:35.241294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.599 [2024-12-05 11:16:35.241306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
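The startup sequence above launches nvmf_tgt inside the nvmf_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-poll pattern, using only the binary path, flags, and socket visible in the trace (the retry cadence and the probe RPC are assumptions):

  # Sketch: start the target in the test namespace, then poll its RPC socket.
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do   # 100 mirrors the traced max_retries=100
      # rpc.py fails until the app is listening on /var/tmp/spdk.sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done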
00:33:10.599 [2024-12-05 11:16:35.241775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86229 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:11.531 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ad43b693-81d6-47dc-a8a2-aa9ebab714e4 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # 
ns2uuid=78ed51d1-f3f0-47a7-ac2a-3822bb85a298 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4592fc86-717b-487b-8a0c-bd58a5804091 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.532 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:33:11.532 null0 00:33:11.532 null1 00:33:11.532 null2 00:33:11.532 [2024-12-05 11:16:36.143698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.532 [2024-12-05 11:16:36.147429] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:11.532 [2024-12-05 11:16:36.147672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86229 ] 00:33:11.532 [2024-12-05 11:16:36.167866] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.790 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.790 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86229 /var/tmp/tgt2.sock 00:33:11.790 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86229 ']' 00:33:11.790 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:33:11.790 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:33:11.791 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
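Everything the test does to either target goes through rpc.py pointed at the right UNIX socket: the rpc_cmd heredoc at nsid.sh@63 provisions the first target (its output shows the null0/null1/null2 bdevs and the 10.0.0.2 port 4420 listener), and the batch driven through /var/tmp/tgt2.sock just below does the same for tgt2, which ends up serving nqn.2024-10.io.spdk:cnode2 on 10.0.0.1 port 4421. A plausible shape for such a provisioning batch; the NQN, namespace UUID, address, and port are from the trace, while the helper name, bdev size, and exact flags are assumptions:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock "$@"; }  # hypothetical wrapper
  rpc nvmf_create_transport -t tcp
  rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  rpc bdev_null_create null0 100 4096          # name, size in MiB, block size (size illustrative)
  rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -n 1 -u ad43b693-81d6-47dc-a8a2-aa9ebab714e4
  rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421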
00:33:11.791 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.791 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:33:11.791 [2024-12-05 11:16:36.310936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.791 [2024-12-05 11:16:36.374145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.050 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.050 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:33:12.050 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:33:12.619 [2024-12-05 11:16:37.039331] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.619 [2024-12-05 11:16:37.055408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:33:12.619 nvme0n1 nvme0n2 00:33:12.619 nvme1n1 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:33:12.619 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:33:13.998 11:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ad43b693-81d6-47dc-a8a2-aa9ebab714e4 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ad43b69381d647dca8a2aa9ebab714e4 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AD43B69381D647DCA8A2AA9EBAB714E4 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ AD43B69381D647DCA8A2AA9EBAB714E4 == \A\D\4\3\B\6\9\3\8\1\D\6\4\7\D\C\A\8\A\2\A\A\9\E\B\A\B\7\1\4\E\4 ]] 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 78ed51d1-f3f0-47a7-ac2a-3822bb85a298 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=78ed51d1f3f047a7ac2a3822bb85a298 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 78ED51D1F3F047A7AC2A3822BB85A298 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 78ED51D1F3F047A7AC2A3822BB85A298 == \7\8\E\D\5\1\D\1\F\3\F\0\4\7\A\7\A\C\2\A\3\8\2\2\B\B\8\5\A\2\9\8 ]] 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:33:13.998 11:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4592fc86-717b-487b-8a0c-bd58a5804091 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4592fc86717b487b8a0cbd58a5804091 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4592FC86717B487B8A0CBD58A5804091 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4592FC86717B487B8A0CBD58A5804091 == \4\5\9\2\F\C\8\6\7\1\7\B\4\8\7\B\8\A\0\C\B\D\5\8\A\5\8\0\4\0\9\1 ]] 00:33:13.998 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86229 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86229 ']' 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86229 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86229 00:33:14.257 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:14.257 killing process with pid 86229 00:33:14.258 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:14.258 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86229' 00:33:14.258 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86229 00:33:14.258 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86229 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 
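The three NGUID checks above reduce to one rule: the NGUID reported for a namespace is its UUID with the dashes stripped, compared case-insensitively. Restated compactly from what the trace executes (uuid2nguid is the `tr -d -` at nvmf/common.sh@544, nvme_get_nguid is the id-ns/jq pipeline at nsid.sh@42):

  uuid=ad43b693-81d6-47dc-a8a2-aa9ebab714e4      # ns1uuid from the earlier uuidgen
  expected=$(tr -d - <<< "$uuid")                # uuid2nguid: drop the dashes
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ ${actual^^} == "${expected^^}" ]] && echo "nsid 1 NGUID matches its UUID"

The same comparison ran against /dev/nvme0n2 and /dev/nvme0n3 for the other two namespaces before the controller was disconnected above.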
00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:14.517 rmmod nvme_tcp 00:33:14.517 rmmod nvme_fabrics 00:33:14.517 rmmod nvme_keyring 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 86180 ']' 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 86180 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86180 ']' 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86180 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.517 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86180 00:33:14.776 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.776 killing process with pid 86180 00:33:14.776 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.776 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86180' 00:33:14.776 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86180 00:33:14.776 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86180 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete 
nvmf_br 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:33:15.035 00:33:15.035 real 0m5.517s 00:33:15.035 user 0m8.070s 00:33:15.035 sys 0m1.837s 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.035 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:33:15.035 ************************************ 00:33:15.035 END TEST nvmf_nsid 00:33:15.035 
************************************ 00:33:15.295 11:16:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:15.295 ************************************ 00:33:15.295 END TEST nvmf_target_extra 00:33:15.295 ************************************ 00:33:15.295 00:33:15.295 real 7m29.586s 00:33:15.295 user 17m36.487s 00:33:15.295 sys 1m53.877s 00:33:15.295 11:16:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.295 11:16:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:15.295 11:16:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:33:15.295 11:16:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:15.295 11:16:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.295 11:16:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:15.295 ************************************ 00:33:15.295 START TEST nvmf_host 00:33:15.295 ************************************ 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:33:15.295 * Looking for test storage... 00:33:15.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.295 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:15.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.584 --rc genhtml_branch_coverage=1 00:33:15.584 --rc genhtml_function_coverage=1 00:33:15.584 --rc genhtml_legend=1 00:33:15.584 --rc geninfo_all_blocks=1 00:33:15.584 --rc geninfo_unexecuted_blocks=1 00:33:15.584 00:33:15.584 ' 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:15.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.584 --rc genhtml_branch_coverage=1 00:33:15.584 --rc genhtml_function_coverage=1 00:33:15.584 --rc genhtml_legend=1 00:33:15.584 --rc geninfo_all_blocks=1 00:33:15.584 --rc geninfo_unexecuted_blocks=1 00:33:15.584 00:33:15.584 ' 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:15.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.584 --rc genhtml_branch_coverage=1 00:33:15.584 --rc genhtml_function_coverage=1 00:33:15.584 --rc genhtml_legend=1 00:33:15.584 --rc geninfo_all_blocks=1 00:33:15.584 --rc geninfo_unexecuted_blocks=1 00:33:15.584 00:33:15.584 ' 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:15.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.584 --rc genhtml_branch_coverage=1 00:33:15.584 --rc genhtml_function_coverage=1 00:33:15.584 --rc genhtml_legend=1 00:33:15.584 --rc geninfo_all_blocks=1 00:33:15.584 --rc geninfo_unexecuted_blocks=1 00:33:15.584 00:33:15.584 ' 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.584 11:16:39 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.584 11:16:39 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:15.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 ************************************ 00:33:15.585 START TEST nvmf_multicontroller 00:33:15.585 ************************************ 00:33:15.585 11:16:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:33:15.585 * Looking for test storage... 
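The scripts/common.sh trace interleaved above ('lt 1.15 2' through 'return 0') is a field-wise version comparison used to pick lcov option spellings: split both versions on dots, dashes, and colons (the traced IFS=.-:), then compare component by component, treating a missing component as zero. Here it concludes lcov 1.15 < 2, so the old '--rc lcov_branch_coverage=1' flags are exported. A trimmed sketch of the same idea (the function name is illustrative; the real cmp_versions also handles '>', '>=' and '==' operators):

  version_lt() {
      local -a v1 v2
      local i
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates the 2.x option names"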
00:33:15.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:15.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.585 --rc genhtml_branch_coverage=1 00:33:15.585 --rc genhtml_function_coverage=1 00:33:15.585 --rc genhtml_legend=1 00:33:15.585 --rc geninfo_all_blocks=1 00:33:15.585 --rc geninfo_unexecuted_blocks=1 00:33:15.585 00:33:15.585 ' 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:15.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.585 --rc genhtml_branch_coverage=1 00:33:15.585 --rc genhtml_function_coverage=1 00:33:15.585 --rc genhtml_legend=1 00:33:15.585 --rc geninfo_all_blocks=1 00:33:15.585 --rc geninfo_unexecuted_blocks=1 00:33:15.585 00:33:15.585 ' 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:15.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.585 --rc genhtml_branch_coverage=1 00:33:15.585 --rc genhtml_function_coverage=1 00:33:15.585 --rc genhtml_legend=1 00:33:15.585 --rc geninfo_all_blocks=1 00:33:15.585 --rc geninfo_unexecuted_blocks=1 00:33:15.585 00:33:15.585 ' 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:15.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.585 --rc genhtml_branch_coverage=1 00:33:15.585 --rc genhtml_function_coverage=1 00:33:15.585 --rc genhtml_legend=1 00:33:15.585 --rc geninfo_all_blocks=1 00:33:15.585 --rc geninfo_unexecuted_blocks=1 00:33:15.585 00:33:15.585 ' 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:33:15.585 11:16:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:15.585 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.586 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:15.847 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:15.847 11:16:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@223 -- # create_target_ns 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local 
dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # return 0 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 -- # local -g _dev 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:15.847 11:16:40 
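Condensed, the namespace and bridge bootstrap that nvmf/setup.sh walks through above comes down to a few iproute2/iptables calls (names taken from the trace; the `-m comment` tag is presumably what lets teardown find its own rules later):

    # Target-side network namespace, with loopback up inside it.
    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up

    # Bridge in the root namespace that will join all the veth peers.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'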
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:15.847 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up target0 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:15.848 11:16:40 
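Each endpoint is one end of a veth pair: the `*_br` peer stays in the root namespace to be enslaved to nvmf_br later, while the named end carries the IP. The create_veth/set_up calls above reduce to this sketch (first pair only; initiator1/target1 repeat it further down):

    for dev in initiator0 target0; do
        ip link add "$dev" type veth peer name "${dev}_br"
        ip link set "$dev" up
        ip link set "${dev}_br" up
    done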
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:15.848 10.0.0.1 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:15.848 10.0.0.2 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 
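Addresses come out of a packed 32-bit pool: 167772161 is 0x0A000001, i.e. 10.0.0.1, and each address is written both to the device and to its ifalias so later helpers can read it back. The trace only shows val_to_ip's final printf with the octets already split; the shift/mask below is an assumed implementation of that step:

    val_to_ip() {   # 167772161 -> 10.0.0.1 (assumed body; only the printf appears in the trace)
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) $((  val        & 255 ))
    }

    ip link set target0 netns nvmf_ns_spdk                  # target side moves into the ns
    ip addr add "$(val_to_ip 167772161)/24" dev initiator0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772162)/24" dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias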
-- # set_up initiator0 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 
4420 -j ACCEPT 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:15.848 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:15.849 11:16:40 
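With addresses assigned, the root-namespace peers are enslaved to the bridge and the NVMe/TCP port is opened per initiator; the whole per-pair sequence then repeats for initiator1/target1 with 10.0.0.3/10.0.0.4, which is what the trace is in the middle of here. The tail end of each pair, spelled out:

    ip link set initiator0_br master nvmf_br
    ip link set target0_br master nvmf_br
    # Open the NVMe/TCP port for this initiator; tagged like the FORWARD rule.
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'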
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:15.849 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up target1 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772163 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:16.109 10.0.0.3 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:16.109 11:16:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.109 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772164 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:16.110 10.0.0.4 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:16.110 
11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:16.110 11:16:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:16.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:33:16.110 00:33:16.110 --- 10.0.0.1 ping statistics --- 00:33:16.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.110 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target0 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:16.110 11:16:40 
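ping_ips re-reads each address from the ifalias written earlier and checks reachability both ways: initiator addresses are pinged from inside the namespace, target addresses from the root namespace. Condensed, with a hypothetical `ns` shorthand that is not in the trace:

    ns() { ip netns exec nvmf_ns_spdk "$@"; }                    # helper, assumed

    ns ping -c 1 "$(cat /sys/class/net/initiator0/ifalias)"      # 10.0.0.1, from inside the ns
    ping -c 1 "$(ns cat /sys/class/net/target0/ifalias)"         # 10.0.0.2, from the root ns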
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:16.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:33:16.110 00:33:16.110 --- 10.0.0.2 ping statistics --- 00:33:16.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.110 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator1 00:33:16.110 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.3' 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:16.111 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:16.111 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:33:16.111 00:33:16.111 --- 10.0.0.3 ping statistics --- 00:33:16.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.111 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:16.111 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:16.111 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:33:16.111 00:33:16.111 --- 10.0.0.4 ping statistics --- 00:33:16.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.111 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # return 0 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator0 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:16.111 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:16.371 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:16.371 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:16.371 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:16.371 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:16.372 11:16:40 
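Once both pairs answer, nvmf_legacy_env maps the dev_map entries back onto the variable names older tests still use; by the end of the block running here the environment is equivalent to:

    NVMF_TARGET_INTERFACE=target0      NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1   NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2      NVMF_SECOND_TARGET_IP=10.0.0.4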
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target0 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target0 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target1 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:16.372 ' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=86602 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 86602 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' 
-z 86602 ']' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:16.372 11:16:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:16.372 [2024-12-05 11:16:40.916182] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:16.372 [2024-12-05 11:16:40.916283] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.632 [2024-12-05 11:16:41.074603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.632 [2024-12-05 11:16:41.137797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.632 [2024-12-05 11:16:41.138076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.632 [2024-12-05 11:16:41.138106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.632 [2024-12-05 11:16:41.138120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.632 [2024-12-05 11:16:41.138131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
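nvmfappstart launches the target inside the namespace and waits for its RPC socket. The core mask explains the reactor notices that follow: 0xE is binary 1110, so exactly three reactors start, on cores 1, 2 and 3. A sketch of the equivalent foreground commands, with a polling loop standing in for the real waitforlisten helper:

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!          # 86602 in this run
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        sleep 0.1
    done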
00:33:16.632 [2024-12-05 11:16:41.139167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.632 [2024-12-05 11:16:41.139237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.632 [2024-12-05 11:16:41.139237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:17.570 11:16:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.570 11:16:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:33:17.571 11:16:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:17.571 11:16:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.571 11:16:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 [2024-12-05 11:16:42.021012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 Malloc0 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
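rpc_cmd is effectively scripts/rpc.py pointed at /var/tmp/spdk.sock, so the provisioning steps running here can be written out directly; cnode2 gets the identical treatment right after, giving the host two subsystems, each listening on two ports:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport; -u sets 8 KiB IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on one address: ports 4420 and 4421 give the host two
    # paths to the same controller, which is what multicontroller exercises.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421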
00:33:17.571 [2024-12-05 11:16:42.094285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 [2024-12-05 11:16:42.102153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 Malloc1 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # 
bdevperf_pid=86660 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86660 /var/tmp/bdevperf.sock 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86660 ']' 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:17.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:17.571 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.138 NVMe0n1 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.138 1 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.138 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.139 2024/12/05 11:16:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:33:18.139 request: 00:33:18.139 { 00:33:18.139 "method": "bdev_nvme_attach_controller", 00:33:18.139 "params": { 00:33:18.139 "name": "NVMe0", 00:33:18.139 "trtype": "tcp", 00:33:18.139 "traddr": "10.0.0.2", 00:33:18.139 "adrfam": "ipv4", 00:33:18.139 "trsvcid": "4420", 00:33:18.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.139 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:33:18.139 "hostaddr": "10.0.0.1", 00:33:18.139 "prchk_reftag": false, 00:33:18.139 "prchk_guard": false, 00:33:18.139 "hdgst": false, 00:33:18.139 "ddgst": false, 00:33:18.139 "allow_unrecognized_csi": false 00:33:18.139 } 00:33:18.139 } 00:33:18.139 Got JSON-RPC error response 00:33:18.139 GoRPCClient: error on JSON-RPC call 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:33:18.139 11:16:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.139 2024/12/05 11:16:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:33:18.139 request: 00:33:18.139 { 00:33:18.139 "method": "bdev_nvme_attach_controller", 00:33:18.139 "params": { 00:33:18.139 "name": "NVMe0", 00:33:18.139 "trtype": "tcp", 00:33:18.139 "traddr": "10.0.0.2", 00:33:18.139 "adrfam": "ipv4", 00:33:18.139 "trsvcid": "4420", 00:33:18.139 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:18.139 "hostaddr": "10.0.0.1", 00:33:18.139 "prchk_reftag": false, 00:33:18.139 "prchk_guard": false, 00:33:18.139 "hdgst": false, 00:33:18.139 "ddgst": false, 00:33:18.139 "allow_unrecognized_csi": false 00:33:18.139 } 00:33:18.139 } 00:33:18.139 Got JSON-RPC error response 00:33:18.139 GoRPCClient: error on JSON-RPC call 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.139 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.399 2024/12/05 11:16:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:33:18.399 request: 00:33:18.399 { 00:33:18.399 "method": "bdev_nvme_attach_controller", 00:33:18.399 "params": { 00:33:18.399 "name": "NVMe0", 00:33:18.399 "trtype": "tcp", 00:33:18.399 "traddr": "10.0.0.2", 00:33:18.399 "adrfam": "ipv4", 00:33:18.399 "trsvcid": "4420", 00:33:18.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.399 "hostaddr": "10.0.0.1", 00:33:18.399 "prchk_reftag": false, 00:33:18.399 "prchk_guard": false, 00:33:18.399 "hdgst": false, 00:33:18.399 "ddgst": false, 00:33:18.399 "multipath": "disable", 00:33:18.399 "allow_unrecognized_csi": false 00:33:18.399 } 00:33:18.399 } 00:33:18.399 Got JSON-RPC error response 00:33:18.399 GoRPCClient: error on JSON-RPC call 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:18.399 11:16:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.399 2024/12/05 11:16:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:33:18.399 request: 00:33:18.399 { 00:33:18.399 "method": "bdev_nvme_attach_controller", 00:33:18.399 "params": { 00:33:18.399 "name": "NVMe0", 00:33:18.399 "trtype": "tcp", 00:33:18.399 "traddr": "10.0.0.2", 00:33:18.399 "adrfam": "ipv4", 00:33:18.399 "trsvcid": "4420", 00:33:18.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.399 "hostaddr": "10.0.0.1", 00:33:18.399 "prchk_reftag": false, 00:33:18.399 "prchk_guard": false, 00:33:18.399 "hdgst": false, 00:33:18.399 "ddgst": false, 00:33:18.399 "multipath": "failover", 00:33:18.399 "allow_unrecognized_csi": false 00:33:18.399 } 00:33:18.399 } 00:33:18.399 Got JSON-RPC error response 00:33:18.399 GoRPCClient: error on JSON-RPC call 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.399 NVMe0n1 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.399 11:16:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.399 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.399 11:16:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:33:18.399 11:16:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:19.779 { 00:33:19.779 "results": [ 00:33:19.779 { 00:33:19.779 "job": "NVMe0n1", 00:33:19.779 "core_mask": "0x1", 00:33:19.779 "workload": "write", 00:33:19.779 "status": "finished", 00:33:19.779 "queue_depth": 128, 00:33:19.779 "io_size": 4096, 00:33:19.779 "runtime": 1.005153, 00:33:19.779 "iops": 23693.905305958397, 00:33:19.779 "mibps": 92.55431760139999, 00:33:19.779 "io_failed": 0, 00:33:19.779 "io_timeout": 0, 00:33:19.779 "avg_latency_us": 5394.450246812868, 00:33:19.779 "min_latency_us": 1856.8533333333332, 00:33:19.779 "max_latency_us": 10360.929523809524 00:33:19.779 } 00:33:19.779 ], 00:33:19.779 "core_count": 1 00:33:19.780 } 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.3 ]] 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.780 nvme1n1 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:33:19.780 11:16:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.3 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.780 nvme1n1 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:33:19.780 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.3 == \1\0\.\0\.\0\.\3 ]] 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86660 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86660 ']' 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86660 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86660 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:20.040 killing process with pid 86660 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 86660' 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86660 00:33:20.040 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86660 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:33:20.300 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:33:20.300 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:33:20.300 [2024-12-05 11:16:42.231917] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:20.300 [2024-12-05 11:16:42.232033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86660 ] 00:33:20.301 [2024-12-05 11:16:42.390602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.301 [2024-12-05 11:16:42.476255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.301 [2024-12-05 11:16:42.971493] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 64993bd2-f669-43d9-800e-2269070a6620 already exists 00:33:20.301 [2024-12-05 11:16:42.971597] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:64993bd2-f669-43d9-800e-2269070a6620 alias for bdev NVMe1n1 00:33:20.301 [2024-12-05 11:16:42.971614] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:33:20.301 Running I/O for 1 seconds... 
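A quick cross-check on the throughput figures in the bdevperf summary reproduced below: the MiB/s column is simply IOPS times the 4096-byte I/O size, which one awk line confirms (numbers taken from the JSON results above):

    awk 'BEGIN { printf "%.2f\n", 23693.905305958397 * 4096 / 1048576 }'
    # prints 92.55 - the same value bdevperf reports in its "mibps" field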
00:33:20.301 23688.00 IOPS, 92.53 MiB/s
00:33:20.301 Latency(us)
[2024-12-05T11:16:44.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:20.301 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:33:20.301 NVMe0n1 : 1.01 23693.91 92.55 0.00 0.00 5394.45 1856.85 10360.93
[2024-12-05T11:16:44.953Z] ===================================================================================================================
[2024-12-05T11:16:44.953Z] Total : 23693.91 92.55 0.00 0.00 5394.45 1856.85 10360.93
00:33:20.301 Received shutdown signal, test time was about 1.000000 seconds
00:33:20.301
00:33:20.301 Latency(us)
[2024-12-05T11:16:44.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T11:16:44.953Z] ===================================================================================================================
[2024-12-05T11:16:44.953Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
--- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 86602 ']' 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 86602 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86602 ']' 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86602 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86602 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:20.301 killing process with pid 86602 00:33:20.301 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86602' 00:33:20.301 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86602 00:33:20.301 11:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86602 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@254 -- # local dev 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:20.880 11:16:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:20.880 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # continue 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # continue 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@274 -- # iptr 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-save 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-restore 00:33:20.881 ************************************ 00:33:20.881 END TEST nvmf_multicontroller 00:33:20.881 ************************************ 00:33:20.881 00:33:20.881 real 0m5.435s 00:33:20.881 user 0m15.482s 00:33:20.881 sys 0m1.608s 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.881 ************************************ 00:33:20.881 START TEST nvmf_aer 00:33:20.881 ************************************ 00:33:20.881 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:21.140 * Looking for test storage... 
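The nvmftestfini/iptr steps above tore down the per-test network; the nvmftestinit trace that follows rebuilds the same topology for the aer test: a bridge (nvmf_br), initiator/target veth pairs, and a target-side network namespace. Stripped of the setup.sh bookkeeping, the sequence is roughly the following sketch, using the device, address, and namespace names that appear in the trace (the script brings each link up and adds a second initiator1/target1 pair the same way):

    ip netns add nvmf_ns_spdk                              # target side lives in its own netns
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                 # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev initiator0                 # host-side initiator address
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br               # bridge the two *_br peers together
    ip link set target0_br master nvmf_br
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT    # allow forwarding across the bridge

With that in place, the host-side 10.0.0.1 reaches the in-namespace listener at 10.0.0.2 over the bridge, which is why the attach_controller calls in these tests target 10.0.0.2 while passing -i 10.0.0.1 as the host address.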
00:33:21.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.140 --rc genhtml_branch_coverage=1 00:33:21.140 --rc genhtml_function_coverage=1 00:33:21.140 --rc genhtml_legend=1 00:33:21.140 --rc geninfo_all_blocks=1 00:33:21.140 --rc geninfo_unexecuted_blocks=1 00:33:21.140 00:33:21.140 ' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.140 --rc genhtml_branch_coverage=1 00:33:21.140 --rc genhtml_function_coverage=1 00:33:21.140 --rc genhtml_legend=1 00:33:21.140 --rc geninfo_all_blocks=1 00:33:21.140 --rc geninfo_unexecuted_blocks=1 00:33:21.140 00:33:21.140 ' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.140 --rc genhtml_branch_coverage=1 00:33:21.140 --rc genhtml_function_coverage=1 00:33:21.140 --rc genhtml_legend=1 00:33:21.140 --rc geninfo_all_blocks=1 00:33:21.140 --rc geninfo_unexecuted_blocks=1 00:33:21.140 00:33:21.140 ' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:21.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.140 --rc genhtml_branch_coverage=1 00:33:21.140 --rc genhtml_function_coverage=1 00:33:21.140 --rc genhtml_legend=1 00:33:21.140 --rc geninfo_all_blocks=1 00:33:21.140 --rc geninfo_unexecuted_blocks=1 00:33:21.140 00:33:21.140 ' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.140 
11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:21.140 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:21.141 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@223 -- # create_target_ns 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # return 0 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@225 -- # 
setup_interfaces 2 veth 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up target0 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:33:21.141 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:21.401 10.0.0.1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:21.401 
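
The setup script addresses devices from an integer pool (ip_pool=0x0a000001, i.e. 167772161, as declared at the top of setup_interfaces); val_to_ip turns the counter back into dotted-quad form, and only its final printf shows up in the trace. A plausible reconstruction of the octet split feeding that printf (the shift arithmetic is an assumption, not shown in the log):

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) \
            $((  val        & 0xff ))
    }
    val_to_ip 167772161    # -> 10.0.0.1, matching the trace above
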
11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:21.401 10.0.0.2 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:21.401 11:16:45 
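
Pair 0 is now fully wired, and the loop repeats identically for pair 1 below. Collapsing the xtrace, each setup_interface_pair iteration amounts to (shown for N=0, commands exactly as logged):

    ip link add initiator0 type veth peer name initiator0_br   # initiator end + bridge peer
    ip link add target0    type veth peer name target0_br      # target end + bridge peer
    ip link set target0 netns nvmf_ns_spdk                     # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias      # record the IP for later lookup
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    ip link set initiator0_br master nvmf_br                   # both *_br peers join the bridge
    ip link set target0_br    master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

(The interleaved ip link set ... up calls are omitted here for brevity; the trace shows each device brought up as it is created and again after addressing.)
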
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up target1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772163 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:21.401 10.0.0.3 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.401 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772164 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 
00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:21.402 10.0.0.4 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:21.402 11:16:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:21.402 
11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:21.402 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:21.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:33:21.662 00:33:21.662 --- 10.0.0.1 ping statistics --- 00:33:21.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.662 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=target0 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:21.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:21.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:33:21.662 00:33:21.662 --- 10.0.0.2 ping statistics --- 00:33:21.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.662 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:21.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:21.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:33:21.662 00:33:21.662 --- 10.0.0.3 ping statistics --- 00:33:21.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.662 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:21.662 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:21.663 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:21.663 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:33:21.663 00:33:21.663 --- 10.0.0.4 ping statistics --- 00:33:21.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.663 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # return 0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator1 00:33:21.663 
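
All four addresses answer, which is the whole point of ping_ips: the helpers keep no state, they re-read each device's address from the ifalias file written during setup and ping it from the opposite side of the veth pair. The pattern, condensed from the trace:

    ip=$(cat /sys/class/net/initiator0/ifalias)      # 10.0.0.1, stashed at setup time
    ip netns exec nvmf_ns_spdk ping -c 1 "$ip"       # target namespace -> initiator
    tip=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)   # 10.0.0.2
    ping -c 1 "$tip"                                 # root namespace -> target
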
11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=target0 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 
00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=target1 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:21.663 ' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=86965 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 86965 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 86965 ']' 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.663 11:16:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:21.923 [2024-12-05 11:16:46.325688] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
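
At this point nvmf_legacy_env has pinned the discovered addresses to the names older tests expect (NVMF_TARGET_INTERFACE=target0, NVMF_TARGET_INTERFACE2=target1, NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4), and nvmfappstart has launched the target inside the namespace; it is pid 86965 in this run. Stripped of xtrace, the launch is roughly (backgrounding and pid capture are an assumption; only the command line and the resulting nvmfpid appear in the log):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # polls until the app answers on /var/tmp/spdk.sock
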
00:33:21.923 [2024-12-05 11:16:46.325796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.923 [2024-12-05 11:16:46.487829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:22.183 [2024-12-05 11:16:46.581363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.183 [2024-12-05 11:16:46.581429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.183 [2024-12-05 11:16:46.581445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.183 [2024-12-05 11:16:46.581459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.183 [2024-12-05 11:16:46.581470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:22.183 [2024-12-05 11:16:46.583061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.183 [2024-12-05 11:16:46.583197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:22.183 [2024-12-05 11:16:46.583205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.183 [2024-12-05 11:16:46.583114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.751 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.012 [2024-12-05 11:16:47.412365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.012 Malloc0 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
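
The rpc_cmd sequence that begins above and completes just below provisions the subsystem end to end. rpc_cmd is autotest_common.sh's wrapper around scripts/rpc.py; issued by hand, the same calls would look roughly like this (flags copied from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0      # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2                  # allow any host, max 2 namespaces
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_subsystems dump below confirms the result: the discovery subsystem plus cnode1 listening on 10.0.0.2:4420 with Malloc0 as nsid 1.
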
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.012 [2024-12-05 11:16:47.484836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.012 [ 00:33:23.012 { 00:33:23.012 "allow_any_host": true, 00:33:23.012 "hosts": [], 00:33:23.012 "listen_addresses": [], 00:33:23.012 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:23.012 "subtype": "Discovery" 00:33:23.012 }, 00:33:23.012 { 00:33:23.012 "allow_any_host": true, 00:33:23.012 "hosts": [], 00:33:23.012 "listen_addresses": [ 00:33:23.012 { 00:33:23.012 "adrfam": "IPv4", 00:33:23.012 "traddr": "10.0.0.2", 00:33:23.012 "trsvcid": "4420", 00:33:23.012 "trtype": "TCP" 00:33:23.012 } 00:33:23.012 ], 00:33:23.012 "max_cntlid": 65519, 00:33:23.012 "max_namespaces": 2, 00:33:23.012 "min_cntlid": 1, 00:33:23.012 "model_number": "SPDK bdev Controller", 00:33:23.012 "namespaces": [ 00:33:23.012 { 00:33:23.012 "bdev_name": "Malloc0", 00:33:23.012 "name": "Malloc0", 00:33:23.012 "nguid": "1EBD64E77DCF4D4D8F443319E3CF5E7B", 00:33:23.012 "nsid": 1, 00:33:23.012 "uuid": "1ebd64e7-7dcf-4d4d-8f44-3319e3cf5e7b" 00:33:23.012 } 00:33:23.012 ], 00:33:23.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.012 "serial_number": "SPDK00000000000001", 00:33:23.012 "subtype": "NVMe" 00:33:23.012 } 00:33:23.012 ] 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=87019 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:33:23.012 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.272 Malloc1 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.272 Asynchronous Event Request test 00:33:23.272 Attaching to 10.0.0.2 00:33:23.272 Attached to 10.0.0.2 00:33:23.272 Registering asynchronous event callbacks... 00:33:23.272 Starting namespace attribute notice tests for all controllers... 00:33:23.272 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:33:23.272 aer_cb - Changed Namespace 00:33:23.272 Cleaning up... 
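
That is the test's core choreography: the aer tool connects to cnode1, registers its AER callback, and touches /tmp/aer_touch_file; the harness polls for that file, then hot-adds Malloc1 as nsid 2, which fires the Namespace Attribute Changed notice the tool just logged. The polling helper, reassembled from the autotest_common.sh@1269-@1280 fragments above (the trace elides the failure branch, so that part is an assumption):

    waitforfile() {
        local i=0
        while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do   # up to ~20 s in 0.1 s steps
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$1" ]    # assumed: non-zero status if the file never appeared
    }
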
00:33:23.272 [ 00:33:23.272 { 00:33:23.272 "allow_any_host": true, 00:33:23.272 "hosts": [], 00:33:23.272 "listen_addresses": [], 00:33:23.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:23.272 "subtype": "Discovery" 00:33:23.272 }, 00:33:23.272 { 00:33:23.272 "allow_any_host": true, 00:33:23.272 "hosts": [], 00:33:23.272 "listen_addresses": [ 00:33:23.272 { 00:33:23.272 "adrfam": "IPv4", 00:33:23.272 "traddr": "10.0.0.2", 00:33:23.272 "trsvcid": "4420", 00:33:23.272 "trtype": "TCP" 00:33:23.272 } 00:33:23.272 ], 00:33:23.272 "max_cntlid": 65519, 00:33:23.272 "max_namespaces": 2, 00:33:23.272 "min_cntlid": 1, 00:33:23.272 "model_number": "SPDK bdev Controller", 00:33:23.272 "namespaces": [ 00:33:23.272 { 00:33:23.272 "bdev_name": "Malloc0", 00:33:23.272 "name": "Malloc0", 00:33:23.272 "nguid": "1EBD64E77DCF4D4D8F443319E3CF5E7B", 00:33:23.272 "nsid": 1, 00:33:23.272 "uuid": "1ebd64e7-7dcf-4d4d-8f44-3319e3cf5e7b" 00:33:23.272 }, 00:33:23.272 { 00:33:23.272 "bdev_name": "Malloc1", 00:33:23.272 "name": "Malloc1", 00:33:23.272 "nguid": "FE4B92145BFC491CBCE43923236B7700", 00:33:23.272 "nsid": 2, 00:33:23.272 "uuid": "fe4b9214-5bfc-491c-bce4-3923236b7700" 00:33:23.272 } 00:33:23.272 ], 00:33:23.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.272 "serial_number": "SPDK00000000000001", 00:33:23.272 "subtype": "NVMe" 00:33:23.272 } 00:33:23.272 ] 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.272 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 87019 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:23.273 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:33:23.532 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:23.532 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e 00:33:23.532 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:23.532 11:16:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:23.532 rmmod nvme_tcp 
00:33:23.532 rmmod nvme_fabrics 00:33:23.532 rmmod nvme_keyring 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 86965 ']' 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 86965 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 86965 ']' 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 86965 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86965 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:23.532 killing process with pid 86965 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86965' 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 86965 00:33:23.532 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 86965 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@254 -- # local dev 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:23.791 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@114 -- # local 
dev=initiator0 in_ns= 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # continue 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # continue 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@274 -- # iptr 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-save 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:24.050 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-restore 00:33:24.050 00:33:24.050 real 0m3.052s 00:33:24.051 user 0m7.487s 00:33:24.051 sys 0m1.038s 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:24.051 ************************************ 00:33:24.051 END TEST nvmf_aer 00:33:24.051 ************************************ 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.051 ************************************ 00:33:24.051 START TEST nvmf_async_init 00:33:24.051 ************************************ 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 
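
For reference, the nvmf_fini teardown traced just above is the setup in reverse: deleting nvmf_br and one end of each veth pair takes the peers down with it, target0/target1 are skipped because they vanished with the namespace, and iptr reloads an iptables dump filtered of every SPDK_NVMF-tagged rule. Condensed (the namespace deletion runs inside _remove_target_ns, whose output is redirected away above, so that line is an assumption):

    ip link delete nvmf_br
    ip link delete initiator0           # initiator0_br goes with it
    ip link delete initiator1
    ip netns delete nvmf_ns_spdk        # assumed body of _remove_target_ns
    iptables-save | grep -v SPDK_NVMF | iptables-restore
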
00:33:24.051 * Looking for test storage... 00:33:24.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:33:24.051 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:24.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.311 --rc genhtml_branch_coverage=1 00:33:24.311 --rc genhtml_function_coverage=1 00:33:24.311 --rc genhtml_legend=1 00:33:24.311 --rc geninfo_all_blocks=1 00:33:24.311 --rc geninfo_unexecuted_blocks=1 00:33:24.311 00:33:24.311 ' 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:24.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.311 --rc genhtml_branch_coverage=1 00:33:24.311 --rc genhtml_function_coverage=1 00:33:24.311 --rc genhtml_legend=1 00:33:24.311 --rc geninfo_all_blocks=1 00:33:24.311 --rc geninfo_unexecuted_blocks=1 00:33:24.311 00:33:24.311 ' 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:24.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.311 --rc genhtml_branch_coverage=1 00:33:24.311 --rc genhtml_function_coverage=1 00:33:24.311 --rc genhtml_legend=1 00:33:24.311 --rc geninfo_all_blocks=1 00:33:24.311 --rc geninfo_unexecuted_blocks=1 00:33:24.311 00:33:24.311 ' 00:33:24.311 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:24.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.311 --rc genhtml_branch_coverage=1 00:33:24.311 --rc genhtml_function_coverage=1 00:33:24.311 --rc genhtml_legend=1 00:33:24.311 --rc geninfo_all_blocks=1 00:33:24.311 --rc geninfo_unexecuted_blocks=1 00:33:24.311 00:33:24.311 ' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.312 11:16:48 
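The lcov probe above runs cmp_versions from scripts/common.sh: both version strings are split on '.', '-', and ':' and compared field by field as integers. A minimal stand-in with the same behavior (the function name here is illustrative, not the script's own):

  version_lt() {   # succeeds when $1 sorts before $2
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # mirrors the "lt 1.15 2" call above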
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:24.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init 
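The '[: : integer expression expected' complaint above is expected noise, not a failure: build_nvmf_app_args tests a flag that is unset in this configuration, so '[' sees an empty string where -eq needs an integer, prints the warning, and returns false, which the script treats as "feature disabled". The behavior is easy to reproduce:

  unset maybe_flag                     # illustrative variable name
  [ "$maybe_flag" -eq 1 ] && echo on   # warns '[: : integer expression expected', echoes nothing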
-- host/async_init.sh@20 -- # tr -d - 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=881239b3ff7040c7bb0f5aebb9c7eeda 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@223 -- # create_target_ns 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:24.312 11:16:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # return 0 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:24.312 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.313 11:16:48 
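nvmftestinit's namespace-and-bridge bootstrap, traced above, amounts to the following (commands as executed; the existence checks are omitted):

  ip netns add nvmf_ns_spdk                # namespace that will host the target
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge          # bridge joining all the veth peers
  ip link set nvmf_br up
  # The comment tag lets nvmf_fini strip exactly these rules later.
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'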
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up target0 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 
10.0.0.1/24 dev initiator0' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:24.313 10.0.0.1 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:24.313 10.0.0.2 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:24.313 
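The numeric arguments to set_ip come from a pool starting at 0x0a000001 (167772161); val_to_ip unpacks such a value into dotted-quad form, and each interface pair consumes two consecutive addresses, which is why pair 0 gets 10.0.0.1/10.0.0.2 and pair 1 will get 10.0.0.3/10.0.0.4. The unpacking, as a standalone snippet:

  val=167772161   # 0x0a000001 = 10*2^24 + 1
  printf '%u.%u.%u.%u\n' \
      $(( val >> 24 & 255 )) $(( val >> 16 & 255 )) \
      $(( val >> 8 & 255 ))  $(( val & 255 ))          # -> 10.0.0.1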
11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:24.313 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:24.590 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:24.590 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:24.590 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:24.591 11:16:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:24.591 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # 
_ns=NVMF_TARGET_NS_CMD 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up target1 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:24.592 11:16:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772163 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:24.592 10.0.0.3 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772164 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:24.592 10.0.0.4 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:24.592 
11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:24.592 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:24.593 11:16:49 
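Each setup_interface_pair iteration above follows the same recipe; here is pair 0 spelled out (pair 1 is identical with 10.0.0.3/10.0.0.4; the intermediate 'ip link set ... up' calls are left out for brevity, and the real iptables rule carries the same SPDK_NVMF comment tag shown earlier):

  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0    type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk       # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias           # cache the IP for later lookups
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
  ip link set initiator0_br master nvmf_br     # both _br peers join the bridge
  ip link set target0_br    master nvmf_br
  # Admit NVMe/TCP (port 4420) arriving from this initiator.
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT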
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:24.593 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator0 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:24.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:33:24.594 00:33:24.594 --- 10.0.0.1 ping statistics --- 00:33:24.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.594 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:24.594 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target0 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target0 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:24.595 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:24.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:24.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:33:24.596 00:33:24.596 --- 10.0.0.2 ping statistics --- 00:33:24.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.596 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator1 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:24.596 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:24.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:24.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:33:24.596 00:33:24.596 --- 10.0.0.3 ping statistics --- 00:33:24.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.597 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target1 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target1 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:24.597 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:24.918 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:24.918 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:33:24.918 00:33:24.918 --- 10.0.0.4 ping statistics --- 00:33:24.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.918 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # return 0 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator0 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n 
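The ping_ips pass above verifies the plumbing in both directions — initiator addresses from inside the namespace, target addresses from the host — using the ifalias files written during setup to look the addresses up. Condensed:

  for id in 0 1; do
      ip netns exec nvmf_ns_spdk ping -c 1 "$(cat /sys/class/net/initiator$id/ifalias)"
      ping -c 1 "$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target$id/ifalias)"
  done

The same ifalias lookups then seed the legacy variables (NVMF_FIRST_INITIATOR_IP, NVMF_SECOND_INITIATOR_IP, NVMF_FIRST_TARGET_IP, NVMF_SECOND_TARGET_IP) as the trace continues below.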
initiator1 ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator1 00:33:24.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target0 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target0 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:24.919 ' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=87250 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 87250 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 87250 ']' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.919 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.919 11:16:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:24.919 [2024-12-05 11:16:49.427894] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:24.919 [2024-12-05 11:16:49.427994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.197 [2024-12-05 11:16:49.586054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.197 [2024-12-05 11:16:49.674876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.197 [2024-12-05 11:16:49.674942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.197 [2024-12-05 11:16:49.674958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.197 [2024-12-05 11:16:49.674972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.197 [2024-12-05 11:16:49.674984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.197 [2024-12-05 11:16:49.675439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 [2024-12-05 11:16:50.495894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 null0 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # 
rpc_cmd bdev_wait_for_examine 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 881239b3ff7040c7bb0f5aebb9c7eeda 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 [2024-12-05 11:16:50.536084] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 nvme0n1 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.133 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.133 [ 00:33:26.133 { 00:33:26.133 "aliases": [ 00:33:26.133 "881239b3-ff70-40c7-bb0f-5aebb9c7eeda" 00:33:26.133 ], 00:33:26.133 "assigned_rate_limits": { 00:33:26.133 "r_mbytes_per_sec": 0, 00:33:26.133 "rw_ios_per_sec": 0, 00:33:26.133 "rw_mbytes_per_sec": 0, 00:33:26.133 "w_mbytes_per_sec": 0 00:33:26.133 }, 00:33:26.133 "block_size": 512, 00:33:26.133 "claimed": false, 00:33:26.133 "driver_specific": { 00:33:26.393 "mp_policy": "active_passive", 00:33:26.393 "nvme": [ 00:33:26.393 { 00:33:26.393 "ctrlr_data": { 00:33:26.393 "ana_reporting": false, 00:33:26.393 "cntlid": 1, 00:33:26.393 "firmware_revision": "25.01", 00:33:26.393 "model_number": "SPDK bdev Controller", 00:33:26.393 "multi_ctrlr": true, 
00:33:26.393 "oacs": { 00:33:26.393 "firmware": 0, 00:33:26.393 "format": 0, 00:33:26.393 "ns_manage": 0, 00:33:26.393 "security": 0 00:33:26.393 }, 00:33:26.393 "serial_number": "00000000000000000000", 00:33:26.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.393 "vendor_id": "0x8086" 00:33:26.393 }, 00:33:26.393 "ns_data": { 00:33:26.393 "can_share": true, 00:33:26.393 "id": 1 00:33:26.393 }, 00:33:26.393 "trid": { 00:33:26.393 "adrfam": "IPv4", 00:33:26.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.393 "traddr": "10.0.0.2", 00:33:26.393 "trsvcid": "4420", 00:33:26.393 "trtype": "TCP" 00:33:26.393 }, 00:33:26.393 "vs": { 00:33:26.393 "nvme_version": "1.3" 00:33:26.393 } 00:33:26.393 } 00:33:26.393 ] 00:33:26.393 }, 00:33:26.393 "memory_domains": [ 00:33:26.393 { 00:33:26.393 "dma_device_id": "system", 00:33:26.393 "dma_device_type": 1 00:33:26.393 } 00:33:26.393 ], 00:33:26.393 "name": "nvme0n1", 00:33:26.393 "num_blocks": 2097152, 00:33:26.393 "numa_id": -1, 00:33:26.393 "product_name": "NVMe disk", 00:33:26.393 "supported_io_types": { 00:33:26.393 "abort": true, 00:33:26.393 "compare": true, 00:33:26.393 "compare_and_write": true, 00:33:26.393 "copy": true, 00:33:26.393 "flush": true, 00:33:26.393 "get_zone_info": false, 00:33:26.393 "nvme_admin": true, 00:33:26.393 "nvme_io": true, 00:33:26.393 "nvme_io_md": false, 00:33:26.393 "nvme_iov_md": false, 00:33:26.393 "read": true, 00:33:26.393 "reset": true, 00:33:26.393 "seek_data": false, 00:33:26.393 "seek_hole": false, 00:33:26.393 "unmap": false, 00:33:26.393 "write": true, 00:33:26.393 "write_zeroes": true, 00:33:26.393 "zcopy": false, 00:33:26.393 "zone_append": false, 00:33:26.393 "zone_management": false 00:33:26.393 }, 00:33:26.393 "uuid": "881239b3-ff70-40c7-bb0f-5aebb9c7eeda", 00:33:26.393 "zoned": false 00:33:26.393 } 00:33:26.393 ] 00:33:26.393 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.393 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:33:26.393 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.393 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.393 [2024-12-05 11:16:50.809066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:26.393 [2024-12-05 11:16:50.809153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1946660 (9): Bad file descriptor 00:33:26.394 [2024-12-05 11:16:50.940752] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.394 [ 00:33:26.394 { 00:33:26.394 "aliases": [ 00:33:26.394 "881239b3-ff70-40c7-bb0f-5aebb9c7eeda" 00:33:26.394 ], 00:33:26.394 "assigned_rate_limits": { 00:33:26.394 "r_mbytes_per_sec": 0, 00:33:26.394 "rw_ios_per_sec": 0, 00:33:26.394 "rw_mbytes_per_sec": 0, 00:33:26.394 "w_mbytes_per_sec": 0 00:33:26.394 }, 00:33:26.394 "block_size": 512, 00:33:26.394 "claimed": false, 00:33:26.394 "driver_specific": { 00:33:26.394 "mp_policy": "active_passive", 00:33:26.394 "nvme": [ 00:33:26.394 { 00:33:26.394 "ctrlr_data": { 00:33:26.394 "ana_reporting": false, 00:33:26.394 "cntlid": 2, 00:33:26.394 "firmware_revision": "25.01", 00:33:26.394 "model_number": "SPDK bdev Controller", 00:33:26.394 "multi_ctrlr": true, 00:33:26.394 "oacs": { 00:33:26.394 "firmware": 0, 00:33:26.394 "format": 0, 00:33:26.394 "ns_manage": 0, 00:33:26.394 "security": 0 00:33:26.394 }, 00:33:26.394 "serial_number": "00000000000000000000", 00:33:26.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.394 "vendor_id": "0x8086" 00:33:26.394 }, 00:33:26.394 "ns_data": { 00:33:26.394 "can_share": true, 00:33:26.394 "id": 1 00:33:26.394 }, 00:33:26.394 "trid": { 00:33:26.394 "adrfam": "IPv4", 00:33:26.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.394 "traddr": "10.0.0.2", 00:33:26.394 "trsvcid": "4420", 00:33:26.394 "trtype": "TCP" 00:33:26.394 }, 00:33:26.394 "vs": { 00:33:26.394 "nvme_version": "1.3" 00:33:26.394 } 00:33:26.394 } 00:33:26.394 ] 00:33:26.394 }, 00:33:26.394 "memory_domains": [ 00:33:26.394 { 00:33:26.394 "dma_device_id": "system", 00:33:26.394 "dma_device_type": 1 00:33:26.394 } 00:33:26.394 ], 00:33:26.394 "name": "nvme0n1", 00:33:26.394 "num_blocks": 2097152, 00:33:26.394 "numa_id": -1, 00:33:26.394 "product_name": "NVMe disk", 00:33:26.394 "supported_io_types": { 00:33:26.394 "abort": true, 00:33:26.394 "compare": true, 00:33:26.394 "compare_and_write": true, 00:33:26.394 "copy": true, 00:33:26.394 "flush": true, 00:33:26.394 "get_zone_info": false, 00:33:26.394 "nvme_admin": true, 00:33:26.394 "nvme_io": true, 00:33:26.394 "nvme_io_md": false, 00:33:26.394 "nvme_iov_md": false, 00:33:26.394 "read": true, 00:33:26.394 "reset": true, 00:33:26.394 "seek_data": false, 00:33:26.394 "seek_hole": false, 00:33:26.394 "unmap": false, 00:33:26.394 "write": true, 00:33:26.394 "write_zeroes": true, 00:33:26.394 "zcopy": false, 00:33:26.394 "zone_append": false, 00:33:26.394 "zone_management": false 00:33:26.394 }, 00:33:26.394 "uuid": "881239b3-ff70-40c7-bb0f-5aebb9c7eeda", 00:33:26.394 "zoned": false 00:33:26.394 } 00:33:26.394 ] 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:33:26.394 11:16:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.vzZsjbgHH0 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.vzZsjbgHH0 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.vzZsjbgHH0 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.394 [2024-12-05 11:16:51.029205] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:26.394 [2024-12-05 11:16:51.029420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.394 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.394 [2024-12-05 11:16:51.045211] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:26.654 nvme0n1 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
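The second attach in this test goes over TLS: the trace above writes a PSK interchange file, registers it with the file-based keyring, and gates the subsystem on it before listening on 4421 with --secure-channel. Condensed into plain commands (same caveat: a sketch distilled from the trace, reusing this run's temp path and key material); the bdev dump that follows confirms the controller came up on port 4421 with cntlid 3:

  # Sketch of the TLS leg, reconstructed from the xtrace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # illustrative variable name
  key=$(mktemp)                                     # /tmp/tmp.vzZsjbgHH0 in this run
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
  chmod 0600 "$key"                                 # tighten perms before registering, as the test does

  $rpc keyring_file_add_key key0 "$key"
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0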
00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.654 [ 00:33:26.654 { 00:33:26.654 "aliases": [ 00:33:26.654 "881239b3-ff70-40c7-bb0f-5aebb9c7eeda" 00:33:26.654 ], 00:33:26.654 "assigned_rate_limits": { 00:33:26.654 "r_mbytes_per_sec": 0, 00:33:26.654 "rw_ios_per_sec": 0, 00:33:26.654 "rw_mbytes_per_sec": 0, 00:33:26.654 "w_mbytes_per_sec": 0 00:33:26.654 }, 00:33:26.654 "block_size": 512, 00:33:26.654 "claimed": false, 00:33:26.654 "driver_specific": { 00:33:26.654 "mp_policy": "active_passive", 00:33:26.654 "nvme": [ 00:33:26.654 { 00:33:26.654 "ctrlr_data": { 00:33:26.654 "ana_reporting": false, 00:33:26.654 "cntlid": 3, 00:33:26.654 "firmware_revision": "25.01", 00:33:26.654 "model_number": "SPDK bdev Controller", 00:33:26.654 "multi_ctrlr": true, 00:33:26.654 "oacs": { 00:33:26.654 "firmware": 0, 00:33:26.654 "format": 0, 00:33:26.654 "ns_manage": 0, 00:33:26.654 "security": 0 00:33:26.654 }, 00:33:26.654 "serial_number": "00000000000000000000", 00:33:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.654 "vendor_id": "0x8086" 00:33:26.654 }, 00:33:26.654 "ns_data": { 00:33:26.654 "can_share": true, 00:33:26.654 "id": 1 00:33:26.654 }, 00:33:26.654 "trid": { 00:33:26.654 "adrfam": "IPv4", 00:33:26.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:26.654 "traddr": "10.0.0.2", 00:33:26.654 "trsvcid": "4421", 00:33:26.654 "trtype": "TCP" 00:33:26.654 }, 00:33:26.654 "vs": { 00:33:26.654 "nvme_version": "1.3" 00:33:26.654 } 00:33:26.654 } 00:33:26.654 ] 00:33:26.654 }, 00:33:26.654 "memory_domains": [ 00:33:26.654 { 00:33:26.654 "dma_device_id": "system", 00:33:26.654 "dma_device_type": 1 00:33:26.654 } 00:33:26.654 ], 00:33:26.654 "name": "nvme0n1", 00:33:26.654 "num_blocks": 2097152, 00:33:26.654 "numa_id": -1, 00:33:26.654 "product_name": "NVMe disk", 00:33:26.654 "supported_io_types": { 00:33:26.654 "abort": true, 00:33:26.654 "compare": true, 00:33:26.654 "compare_and_write": true, 00:33:26.654 "copy": true, 00:33:26.654 "flush": true, 00:33:26.654 "get_zone_info": false, 00:33:26.654 "nvme_admin": true, 00:33:26.654 "nvme_io": true, 00:33:26.654 "nvme_io_md": false, 00:33:26.654 "nvme_iov_md": false, 00:33:26.654 "read": true, 00:33:26.654 "reset": true, 00:33:26.654 "seek_data": false, 00:33:26.654 "seek_hole": false, 00:33:26.654 "unmap": false, 00:33:26.654 "write": true, 00:33:26.654 "write_zeroes": true, 00:33:26.654 "zcopy": false, 00:33:26.654 "zone_append": false, 00:33:26.654 "zone_management": false 00:33:26.654 }, 00:33:26.654 "uuid": "881239b3-ff70-40c7-bb0f-5aebb9c7eeda", 00:33:26.654 "zoned": false 00:33:26.654 } 00:33:26.654 ] 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.vzZsjbgHH0 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
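The long address-resolution preamble at the top of this section (get_initiator_ip_address, get_tcp_target_ip_address, and friends) all funnels into one trick: each veth stores its IP in its ifalias, and target-side devices are read through the nvmf_ns_spdk namespace. Roughly, with get_ip as a hypothetical name compressing the traced helpers:

  # Compressed sketch of the ifalias lookup seen earlier in this section.
  get_ip() {
      # initiator0/initiator1 live on the host; target0/target1 live inside nvmf_ns_spdk
      local dev=$1 ns_cmd=${2:-}
      $ns_cmd cat "/sys/class/net/$dev/ifalias"
  }
  get_ip initiator0                                # 10.0.0.1 -> NVMF_FIRST_INITIATOR_IP
  get_ip initiator1                                # 10.0.0.3 -> NVMF_SECOND_INITIATOR_IP
  get_ip target0 "ip netns exec nvmf_ns_spdk"      # 10.0.0.2 -> NVMF_FIRST_TARGET_IP
  get_ip target1 "ip netns exec nvmf_ns_spdk"      # 10.0.0.4 -> NVMF_SECOND_TARGET_IP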
00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:33:26.654 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:26.655 rmmod nvme_tcp 00:33:26.655 rmmod nvme_fabrics 00:33:26.655 rmmod nvme_keyring 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 87250 ']' 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 87250 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 87250 ']' 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 87250 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.655 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87250 00:33:26.914 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:26.914 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:26.914 killing process with pid 87250 00:33:26.914 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87250' 00:33:26.914 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 87250 00:33:26.914 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 87250 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@254 -- # local dev 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 
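The nvmf_fini trace now under way tears the virtual topology back down: target0/target1 vanish with the namespace (hence the continue branches for those devices below), so only the bridge and the host-side veths need explicit deletes, and the firewall rules are dropped by filtering on their SPDK_NVMF comment tag. The equivalent by hand, assuming this run's device and namespace names:

  # Sketch of the teardown traced below, not the verbatim nvmf/setup.sh helpers.
  ip netns delete nvmf_ns_spdk                         # _remove_target_ns: takes target0/target1 with it
  ip link delete nvmf_br
  ip link delete initiator0
  ip link delete initiator1
  iptables-save | grep -v SPDK_NVMF | iptables-restore # keep everything except SPDK-tagged rules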
00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # continue 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # continue 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@274 -- # iptr 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-save 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-restore 00:33:27.174 00:33:27.174 real 0m3.175s 00:33:27.174 user 0m2.730s 
00:33:27.174 sys 0m1.045s 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:27.174 ************************************ 00:33:27.174 END TEST nvmf_async_init 00:33:27.174 ************************************ 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.174 ************************************ 00:33:27.174 START TEST dma 00:33:27.174 ************************************ 00:33:27.174 11:16:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:33:27.434 * Looking for test storage... 00:33:27.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:27.434 11:16:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:27.434 11:16:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:33:27.434 11:16:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:27.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.434 --rc genhtml_branch_coverage=1 00:33:27.434 --rc genhtml_function_coverage=1 00:33:27.434 --rc genhtml_legend=1 00:33:27.434 --rc geninfo_all_blocks=1 00:33:27.434 --rc geninfo_unexecuted_blocks=1 00:33:27.434 00:33:27.434 ' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:27.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.434 --rc genhtml_branch_coverage=1 00:33:27.434 --rc genhtml_function_coverage=1 00:33:27.434 --rc genhtml_legend=1 00:33:27.434 --rc geninfo_all_blocks=1 00:33:27.434 --rc geninfo_unexecuted_blocks=1 00:33:27.434 00:33:27.434 ' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:27.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.434 --rc genhtml_branch_coverage=1 00:33:27.434 --rc genhtml_function_coverage=1 00:33:27.434 --rc genhtml_legend=1 00:33:27.434 --rc geninfo_all_blocks=1 00:33:27.434 --rc geninfo_unexecuted_blocks=1 00:33:27.434 00:33:27.434 ' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:27.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.434 --rc genhtml_branch_coverage=1 00:33:27.434 --rc genhtml_function_coverage=1 00:33:27.434 --rc genhtml_legend=1 00:33:27.434 --rc geninfo_all_blocks=1 00:33:27.434 --rc geninfo_unexecuted_blocks=1 00:33:27.434 00:33:27.434 ' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.434 11:16:52 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:27.434 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@50 -- # : 0 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:27.435 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:33:27.435 00:33:27.435 real 0m0.237s 00:33:27.435 user 0m0.132s 00:33:27.435 sys 0m0.118s 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.435 11:16:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:27.435 ************************************ 00:33:27.435 END TEST dma 00:33:27.435 ************************************ 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:27.694 11:16:52 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.694 ************************************ 00:33:27.694 START TEST nvmf_identify 00:33:27.694 ************************************ 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:27.694 * Looking for test storage... 00:33:27.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:33:27.694 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.695 --rc genhtml_branch_coverage=1 00:33:27.695 --rc genhtml_function_coverage=1 00:33:27.695 --rc genhtml_legend=1 00:33:27.695 --rc geninfo_all_blocks=1 00:33:27.695 --rc geninfo_unexecuted_blocks=1 00:33:27.695 00:33:27.695 ' 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.695 --rc genhtml_branch_coverage=1 00:33:27.695 --rc genhtml_function_coverage=1 00:33:27.695 --rc genhtml_legend=1 00:33:27.695 --rc geninfo_all_blocks=1 00:33:27.695 --rc geninfo_unexecuted_blocks=1 00:33:27.695 00:33:27.695 ' 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.695 --rc genhtml_branch_coverage=1 00:33:27.695 --rc genhtml_function_coverage=1 00:33:27.695 --rc genhtml_legend=1 00:33:27.695 --rc geninfo_all_blocks=1 00:33:27.695 --rc geninfo_unexecuted_blocks=1 00:33:27.695 00:33:27.695 ' 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.695 --rc genhtml_branch_coverage=1 00:33:27.695 --rc genhtml_function_coverage=1 00:33:27.695 --rc genhtml_legend=1 00:33:27.695 --rc geninfo_all_blocks=1 00:33:27.695 --rc geninfo_unexecuted_blocks=1 00:33:27.695 00:33:27.695 ' 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.695 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:27.955 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:27.955 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@223 -- # create_target_ns 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:27.956 11:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up target0 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:27.956 10.0.0.1 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target0 
ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:27.956 10.0.0.2 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:27.956 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:27.957 11:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:27.957 11:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up target1 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:27.957 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772163 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 
-- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:28.218 10.0.0.3 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772164 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:28.218 10.0.0.4 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:28.218 11:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:28.218 
11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:28.218 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:28.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
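Note: the ping_ips 2 loop entered above validates both veth pairs before any NVMe/TCP traffic is attempted. Initiator-side addresses sit in the default namespace while target-side addresses live inside nvmf_ns_spdk, so each leg is probed from the opposite side. A minimal sketch of the two probes for pair 0, using the commands eval'd by ping_ip in this log:

    # target's view of pair 0: reach the initiator address from inside the namespace
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
    # initiator's view of pair 0: reach the target address from the host side
    ping -c 1 10.0.0.2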
00:33:28.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:33:28.219 00:33:28.219 --- 10.0.0.1 ping statistics --- 00:33:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.219 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:28.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
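Note: the address lookups feeding these pings never parse `ip addr` output. set_ip mirrored every assigned address into /sys/class/net/<dev>/ifalias (the tee calls earlier in the log), and get_ip_address simply reads that file back, inside the namespace when the device lives there:

    # written once at set_ip time
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    # read back by get_ip_address / get_target_ip_address
    ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias    # -> 10.0.0.2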
00:33:28.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:33:28.219 00:33:28.219 --- 10.0.0.2 ping statistics --- 00:33:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.219 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:28.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
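Note: the addresses themselves come from an integer pool. setup_interfaces starts at 0x0a000001 (167772161, i.e. 10.0.0.1) and advances by two per initiator/target pair, so pair 1 gets 167772163/167772164. The log only shows val_to_ip's final printf with the octets already split; one plausible way to derive them from the 32-bit value (an assumption, not necessarily setup.sh's exact code) is:

    val=167772163   # 0x0a000003
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) $((  val        & 255 ))    # -> 10.0.0.3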
00:33:28.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:33:28.219 00:33:28.219 --- 10.0.0.3 ping statistics --- 00:33:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.219 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:28.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
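Note: at this point the topology built by setup_interfaces 2 veth is complete. Summarizing the dev_map entries and addresses established above:

    device       namespace       address        bridge peer
    initiator0   default         10.0.0.1/24    initiator0_br -> nvmf_br
    target0      nvmf_ns_spdk    10.0.0.2/24    target0_br    -> nvmf_br
    initiator1   default         10.0.0.3/24    initiator1_br -> nvmf_br
    target1      nvmf_ns_spdk    10.0.0.4/24    target1_br    -> nvmf_br

Each initiator also received an `iptables -I INPUT 1 -i initiatorX -p tcp --dport 4420 -j ACCEPT` rule (tagged with an SPDK_NVMF comment) so the default NVMe/TCP port is reachable, and nvmf_br itself got a matching FORWARD accept rule earlier.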
00:33:28.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.313 ms 00:33:28.219 00:33:28.219 --- 10.0.0.4 ping statistics --- 00:33:28.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.219 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # return 0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:28.219 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:28.220 11:16:52 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:28.220 ' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:28.220 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87579 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87579 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 87579 ']' 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
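Note: because setup.sh@227 prefixed NVMF_APP with NVMF_TARGET_NS_CMD, the target application is launched inside the namespace and therefore binds the target-side addresses (10.0.0.2/10.0.0.4). The launch traced above, with flags as logged (-i shared-memory id 0, -e 0xFFFF tracepoint group mask, -m 0xF for a four-core mask, matching the four reactors that start below):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten 87579 then polls until the app answers RPCs on /var/tmp/spdk.sock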
00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.479 11:16:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.479 [2024-12-05 11:16:52.938755] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:28.479 [2024-12-05 11:16:52.938841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.479 [2024-12-05 11:16:53.084955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:28.738 [2024-12-05 11:16:53.167775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.738 [2024-12-05 11:16:53.168108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.738 [2024-12-05 11:16:53.168180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.738 [2024-12-05 11:16:53.168234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.738 [2024-12-05 11:16:53.168280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.738 [2024-12-05 11:16:53.169742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.738 [2024-12-05 11:16:53.169904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.738 [2024-12-05 11:16:53.170007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:28.738 [2024-12-05 11:16:53.170073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.738 [2024-12-05 11:16:53.350089] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:28.738 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.997 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:28.997 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.997 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.997 Malloc0 00:33:28.997 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
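Note: the rpc_cmd calls in this block drive the target over /var/tmp/spdk.sock; in these tests rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py. A sketch of the equivalent manual sequence, using exactly the RPCs and arguments traced here and just below (the subsystem, namespace, and listener steps follow in the log):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420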
00:33:28.997 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:28.997 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.997 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 [2024-12-05 11:16:53.481745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:28.998 [ 00:33:28.998 { 00:33:28.998 "allow_any_host": true, 00:33:28.998 "hosts": [], 00:33:28.998 "listen_addresses": [ 00:33:28.998 { 00:33:28.998 "adrfam": "IPv4", 00:33:28.998 "traddr": "10.0.0.2", 00:33:28.998 "trsvcid": "4420", 00:33:28.998 "trtype": "TCP" 00:33:28.998 } 00:33:28.998 ], 00:33:28.998 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:28.998 "subtype": "Discovery" 00:33:28.998 }, 00:33:28.998 { 00:33:28.998 "allow_any_host": true, 00:33:28.998 "hosts": [], 00:33:28.998 "listen_addresses": [ 00:33:28.998 { 00:33:28.998 "adrfam": "IPv4", 00:33:28.998 "traddr": "10.0.0.2", 00:33:28.998 "trsvcid": "4420", 00:33:28.998 "trtype": "TCP" 00:33:28.998 } 00:33:28.998 ], 00:33:28.998 "max_cntlid": 65519, 00:33:28.998 "max_namespaces": 32, 00:33:28.998 "min_cntlid": 1, 00:33:28.998 "model_number": "SPDK bdev Controller", 00:33:28.998 "namespaces": [ 00:33:28.998 { 00:33:28.998 "bdev_name": "Malloc0", 00:33:28.998 "eui64": "ABCDEF0123456789", 00:33:28.998 "name": "Malloc0", 00:33:28.998 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:33:28.998 "nsid": 1, 00:33:28.998 "uuid": 
"f31bafbb-1517-4170-acc6-ace6583a63c0" 00:33:28.998 } 00:33:28.998 ], 00:33:28.998 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.998 "serial_number": "SPDK00000000000001", 00:33:28.998 "subtype": "NVMe" 00:33:28.998 } 00:33:28.998 ] 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.998 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:33:28.998 [2024-12-05 11:16:53.541351] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:28.998 [2024-12-05 11:16:53.541409] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87620 ] 00:33:29.261 [2024-12-05 11:16:53.700686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:33:29.261 [2024-12-05 11:16:53.700758] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:29.261 [2024-12-05 11:16:53.700765] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:29.261 [2024-12-05 11:16:53.700786] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:29.261 [2024-12-05 11:16:53.700799] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:29.261 [2024-12-05 11:16:53.701135] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:33:29.261 [2024-12-05 11:16:53.701188] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d4cd90 0 00:33:29.261 [2024-12-05 11:16:53.708608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:29.261 [2024-12-05 11:16:53.708630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:29.261 [2024-12-05 11:16:53.708635] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:29.261 [2024-12-05 11:16:53.708639] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:29.261 [2024-12-05 11:16:53.708689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.708696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.708700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.261 [2024-12-05 11:16:53.708719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:29.261 [2024-12-05 11:16:53.708750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.261 [2024-12-05 11:16:53.716607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.261 [2024-12-05 11:16:53.716622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.261 [2024-12-05 11:16:53.716626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) 
on tqpair=0x1d4cd90 00:33:29.261 [2024-12-05 11:16:53.716644] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:29.261 [2024-12-05 11:16:53.716653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:33:29.261 [2024-12-05 11:16:53.716659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:33:29.261 [2024-12-05 11:16:53.716678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.261 [2024-12-05 11:16:53.716696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.261 [2024-12-05 11:16:53.716721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.261 [2024-12-05 11:16:53.716786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.261 [2024-12-05 11:16:53.716792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.261 [2024-12-05 11:16:53.716796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.261 [2024-12-05 11:16:53.716806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:33:29.261 [2024-12-05 11:16:53.716813] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:33:29.261 [2024-12-05 11:16:53.716820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.261 [2024-12-05 11:16:53.716833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.261 [2024-12-05 11:16:53.716849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.261 [2024-12-05 11:16:53.716901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.261 [2024-12-05 11:16:53.716907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.261 [2024-12-05 11:16:53.716911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.261 [2024-12-05 11:16:53.716921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:33:29.261 [2024-12-05 11:16:53.716929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:33:29.261 [2024-12-05 11:16:53.716936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
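The DEBUG trace above walks the NVMe-oF host initialization state machine that spdk_nvme_identify drives: FABRIC CONNECT on the admin queue, VS and CAP property GETs, CC.EN toggling with a CSTS.RDY poll, then IDENTIFY CONTROLLER. A minimal C sketch of the same flow through SPDK's public host API is shown here for orientation; it assumes an SPDK build tree and reuses the transport string from this run, and the program name and printed fields are illustrative, not part of the test:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (memory, logging, etc.). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* illustrative name */
	if (spdk_env_init(&env_opts) != 0) {
		return EXIT_FAILURE;
	}

	/* Same transport ID string host/identify.sh passes via -r above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return EXIT_FAILURE;
	}

	/*
	 * spdk_nvme_connect() performs the sequence the DEBUG entries trace:
	 * FABRIC CONNECT on the admin queue, VS/CAP property GETs, CC.EN = 1,
	 * a CSTS.RDY poll, and finally IDENTIFY CONTROLLER.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return EXIT_FAILURE;
	}

	/* Cached IDENTIFY CONTROLLER data, e.g. the CNTLID and MDTS the trace reports. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID 0x%04x, MDTS %u\n", (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return EXIT_SUCCESS;
}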
00:33:29.261 [2024-12-05 11:16:53.716940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.716943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.261 [2024-12-05 11:16:53.716949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.261 [2024-12-05 11:16:53.716964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.261 [2024-12-05 11:16:53.717012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.261 [2024-12-05 11:16:53.717018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.261 [2024-12-05 11:16:53.717022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.261 [2024-12-05 11:16:53.717031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:29.261 [2024-12-05 11:16:53.717040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.261 [2024-12-05 11:16:53.717054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.261 [2024-12-05 11:16:53.717068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.261 [2024-12-05 11:16:53.717117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.261 [2024-12-05 11:16:53.717123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.261 [2024-12-05 11:16:53.717126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.261 [2024-12-05 11:16:53.717135] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:33:29.261 [2024-12-05 11:16:53.717140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:33:29.261 [2024-12-05 11:16:53.717147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:29.261 [2024-12-05 11:16:53.717257] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:33:29.261 [2024-12-05 11:16:53.717263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:29.261 [2024-12-05 11:16:53.717273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717281] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.261 [2024-12-05 11:16:53.717287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.261 [2024-12-05 11:16:53.717302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.261 [2024-12-05 11:16:53.717351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.261 [2024-12-05 11:16:53.717356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.261 [2024-12-05 11:16:53.717360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.261 [2024-12-05 11:16:53.717369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:29.261 [2024-12-05 11:16:53.717378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.261 [2024-12-05 11:16:53.717392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.261 [2024-12-05 11:16:53.717407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.261 [2024-12-05 11:16:53.717460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.261 [2024-12-05 11:16:53.717466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.261 [2024-12-05 11:16:53.717469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.261 [2024-12-05 11:16:53.717473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.261 [2024-12-05 11:16:53.717478] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:29.261 [2024-12-05 11:16:53.717483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:33:29.262 [2024-12-05 11:16:53.717490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:33:29.262 [2024-12-05 11:16:53.717500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:33:29.262 [2024-12-05 11:16:53.717509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.717519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.262 [2024-12-05 11:16:53.717533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.262 [2024-12-05 
11:16:53.717630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.262 [2024-12-05 11:16:53.717637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.262 [2024-12-05 11:16:53.717641] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717646] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4cd90): datao=0, datal=4096, cccid=0 00:33:29.262 [2024-12-05 11:16:53.717651] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8d600) on tqpair(0x1d4cd90): expected_datao=0, payload_size=4096 00:33:29.262 [2024-12-05 11:16:53.717657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717665] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717669] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.262 [2024-12-05 11:16:53.717683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.262 [2024-12-05 11:16:53.717687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.262 [2024-12-05 11:16:53.717699] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:33:29.262 [2024-12-05 11:16:53.717705] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:33:29.262 [2024-12-05 11:16:53.717709] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:33:29.262 [2024-12-05 11:16:53.717719] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:33:29.262 [2024-12-05 11:16:53.717724] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:33:29.262 [2024-12-05 11:16:53.717729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:33:29.262 [2024-12-05 11:16:53.717739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:33:29.262 [2024-12-05 11:16:53.717747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.717761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:29.262 [2024-12-05 11:16:53.717778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.262 [2024-12-05 11:16:53.717833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.262 [2024-12-05 11:16:53.717839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.262 [2024-12-05 11:16:53.717842] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90 00:33:29.262 [2024-12-05 11:16:53.717854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.717867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.262 [2024-12-05 11:16:53.717873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.717886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.262 [2024-12-05 11:16:53.717892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.717905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.262 [2024-12-05 11:16:53.717911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.717923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.262 [2024-12-05 11:16:53.717928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:29.262 [2024-12-05 11:16:53.717936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:29.262 [2024-12-05 11:16:53.717943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.717946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.717952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.262 [2024-12-05 11:16:53.717973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d600, cid 0, qid 0 00:33:29.262 [2024-12-05 11:16:53.717978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d780, cid 1, qid 0 00:33:29.262 [2024-12-05 11:16:53.717983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8d900, cid 2, qid 0 00:33:29.262 
[2024-12-05 11:16:53.717988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.262 [2024-12-05 11:16:53.717992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8dc00, cid 4, qid 0 00:33:29.262 [2024-12-05 11:16:53.718063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.262 [2024-12-05 11:16:53.718069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.262 [2024-12-05 11:16:53.718072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8dc00) on tqpair=0x1d4cd90 00:33:29.262 [2024-12-05 11:16:53.718081] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:33:29.262 [2024-12-05 11:16:53.718087] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:33:29.262 [2024-12-05 11:16:53.718097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.718108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.262 [2024-12-05 11:16:53.718122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8dc00, cid 4, qid 0 00:33:29.262 [2024-12-05 11:16:53.718184] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.262 [2024-12-05 11:16:53.718190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.262 [2024-12-05 11:16:53.718194] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718197] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4cd90): datao=0, datal=4096, cccid=4 00:33:29.262 [2024-12-05 11:16:53.718202] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8dc00) on tqpair(0x1d4cd90): expected_datao=0, payload_size=4096 00:33:29.262 [2024-12-05 11:16:53.718208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718214] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718218] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.262 [2024-12-05 11:16:53.718231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.262 [2024-12-05 11:16:53.718234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8dc00) on tqpair=0x1d4cd90 00:33:29.262 [2024-12-05 11:16:53.718251] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:33:29.262 [2024-12-05 11:16:53.718301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.718315] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.262 [2024-12-05 11:16:53.718323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d4cd90) 00:33:29.262 [2024-12-05 11:16:53.718336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.262 [2024-12-05 11:16:53.718361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8dc00, cid 4, qid 0 00:33:29.262 [2024-12-05 11:16:53.718367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8dd80, cid 5, qid 0 00:33:29.262 [2024-12-05 11:16:53.718481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.262 [2024-12-05 11:16:53.718487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.262 [2024-12-05 11:16:53.718491] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718495] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4cd90): datao=0, datal=1024, cccid=4 00:33:29.262 [2024-12-05 11:16:53.718500] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8dc00) on tqpair(0x1d4cd90): expected_datao=0, payload_size=1024 00:33:29.262 [2024-12-05 11:16:53.718505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718511] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.262 [2024-12-05 11:16:53.718515] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.263 [2024-12-05 11:16:53.718520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.263 [2024-12-05 11:16:53.718525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.263 [2024-12-05 11:16:53.718529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.263 [2024-12-05 11:16:53.718533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8dd80) on tqpair=0x1d4cd90 00:33:29.263 [2024-12-05 11:16:53.759632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.263 [2024-12-05 11:16:53.759654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.263 [2024-12-05 11:16:53.759658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.263 [2024-12-05 11:16:53.759663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8dc00) on tqpair=0x1d4cd90 00:33:29.263 [2024-12-05 11:16:53.759682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.263 [2024-12-05 11:16:53.759686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4cd90) 00:33:29.263 [2024-12-05 11:16:53.759696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.263 [2024-12-05 11:16:53.759730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8dc00, cid 4, qid 0 00:33:29.263 [2024-12-05 11:16:53.759801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.263 [2024-12-05 11:16:53.759807] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:33:29.263 [2024-12-05 11:16:53.759811] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759815] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4cd90): datao=0, datal=3072, cccid=4
00:33:29.263 [2024-12-05 11:16:53.759820] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8dc00) on tqpair(0x1d4cd90): expected_datao=0, payload_size=3072
00:33:29.263 [2024-12-05 11:16:53.759825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759833] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759837] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:33:29.263 [2024-12-05 11:16:53.759850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:33:29.263 [2024-12-05 11:16:53.759855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8dc00) on tqpair=0x1d4cd90
00:33:29.263 [2024-12-05 11:16:53.759868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d4cd90)
00:33:29.263 [2024-12-05 11:16:53.759879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.263 [2024-12-05 11:16:53.759900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8dc00, cid 4, qid 0
00:33:29.263 [2024-12-05 11:16:53.759960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:33:29.263 [2024-12-05 11:16:53.759966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:33:29.263 [2024-12-05 11:16:53.759969] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759973] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d4cd90): datao=0, datal=8, cccid=4
00:33:29.263 [2024-12-05 11:16:53.759978] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d8dc00) on tqpair(0x1d4cd90): expected_datao=0, payload_size=8
00:33:29.263 [2024-12-05 11:16:53.759983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759989] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.759992] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:33:29.263 =====================================================
00:33:29.263 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:33:29.263 =====================================================
00:33:29.263 Controller Capabilities/Features
00:33:29.263 ================================
00:33:29.263 Vendor ID: 0000
00:33:29.263 Subsystem Vendor ID: 0000
00:33:29.263 Serial Number: ....................
00:33:29.263 Model Number: ........................................
00:33:29.263 Firmware Version: 25.01
00:33:29.263 Recommended Arb Burst: 0
00:33:29.263 IEEE OUI Identifier: 00 00 00
00:33:29.263 Multi-path I/O
00:33:29.263 May have multiple subsystem ports: No
00:33:29.263 May have multiple controllers: No
00:33:29.263 Associated with SR-IOV VF: No
00:33:29.263 Max Data Transfer Size: 131072
00:33:29.263 Max Number of Namespaces: 0
00:33:29.263 Max Number of I/O Queues: 1024
00:33:29.263 NVMe Specification Version (VS): 1.3
00:33:29.263 NVMe Specification Version (Identify): 1.3
00:33:29.263 Maximum Queue Entries: 128
00:33:29.263 Contiguous Queues Required: Yes
00:33:29.263 Arbitration Mechanisms Supported
00:33:29.263 Weighted Round Robin: Not Supported
00:33:29.263 Vendor Specific: Not Supported
00:33:29.263 Reset Timeout: 15000 ms
00:33:29.263 Doorbell Stride: 4 bytes
00:33:29.263 NVM Subsystem Reset: Not Supported
00:33:29.263 Command Sets Supported
00:33:29.263 NVM Command Set: Supported
00:33:29.263 Boot Partition: Not Supported
00:33:29.263 Memory Page Size Minimum: 4096 bytes
00:33:29.263 Memory Page Size Maximum: 4096 bytes
00:33:29.263 Persistent Memory Region: Not Supported
00:33:29.263 Optional Asynchronous Events Supported
00:33:29.263 Namespace Attribute Notices: Not Supported
00:33:29.263 Firmware Activation Notices: Not Supported
00:33:29.263 ANA Change Notices: Not Supported
00:33:29.263 PLE Aggregate Log Change Notices: Not Supported
00:33:29.263 LBA Status Info Alert Notices: Not Supported
00:33:29.263 EGE Aggregate Log Change Notices: Not Supported
00:33:29.263 Normal NVM Subsystem Shutdown event: Not Supported
00:33:29.263 Zone Descriptor Change Notices: Not Supported
00:33:29.263 Discovery Log Change Notices: Supported
00:33:29.263 Controller Attributes
00:33:29.263 128-bit Host Identifier: Not Supported
00:33:29.263 Non-Operational Permissive Mode: Not Supported
00:33:29.263 NVM Sets: Not Supported
00:33:29.263 Read Recovery Levels: Not Supported
00:33:29.263 Endurance Groups: Not Supported
00:33:29.263 Predictable Latency Mode: Not Supported
00:33:29.263 Traffic Based Keep ALive: Not Supported
00:33:29.263 Namespace Granularity: Not Supported
00:33:29.263 SQ Associations: Not Supported
00:33:29.263 UUID List: Not Supported
00:33:29.263 Multi-Domain Subsystem: Not Supported
00:33:29.263 Fixed Capacity Management: Not Supported
00:33:29.263 Variable Capacity Management: Not Supported
00:33:29.263 Delete Endurance Group: Not Supported
00:33:29.263 Delete NVM Set: Not Supported
00:33:29.263 Extended LBA Formats Supported: Not Supported
00:33:29.263 Flexible Data Placement Supported: Not Supported
00:33:29.263
00:33:29.263 Controller Memory Buffer Support
00:33:29.263 ================================
00:33:29.263 Supported: No
00:33:29.263
00:33:29.263 Persistent Memory Region Support
00:33:29.263 ================================
00:33:29.263 Supported: No
00:33:29.263
00:33:29.263 Admin Command Set Attributes
00:33:29.263 ============================
00:33:29.263 Security Send/Receive: Not Supported
00:33:29.263 Format NVM: Not Supported
00:33:29.263 Firmware Activate/Download: Not Supported
00:33:29.263 Namespace Management: Not Supported
00:33:29.263 Device Self-Test: Not Supported
00:33:29.263 Directives: Not Supported
00:33:29.263 NVMe-MI: Not Supported
00:33:29.263 Virtualization Management: Not Supported
00:33:29.263 Doorbell Buffer Config: Not Supported
00:33:29.263 Get LBA Status Capability: Not Supported
00:33:29.263 Command & Feature Lockdown Capability: Not Supported
00:33:29.263 Abort Command Limit: 1
00:33:29.263 Async Event Request Limit: 4
00:33:29.263 Number of Firmware Slots: N/A
00:33:29.263 Firmware Slot 1 Read-Only: N/A
00:33:29.263 [2024-12-05 11:16:53.805620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:33:29.263 [2024-12-05 11:16:53.805643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:33:29.263 [2024-12-05 11:16:53.805647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:33:29.263 [2024-12-05 11:16:53.805652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8dc00) on tqpair=0x1d4cd90
00:33:29.263 Firmware Activation Without Reset: N/A
00:33:29.263 Multiple Update Detection Support: N/A
00:33:29.263 Firmware Update Granularity: No Information Provided
00:33:29.263 Per-Namespace SMART Log: No
00:33:29.263 Asymmetric Namespace Access Log Page: Not Supported
00:33:29.263 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:33:29.263 Command Effects Log Page: Not Supported
00:33:29.263 Get Log Page Extended Data: Supported
00:33:29.263 Telemetry Log Pages: Not Supported
00:33:29.263 Persistent Event Log Pages: Not Supported
00:33:29.263 Supported Log Pages Log Page: May Support
00:33:29.263 Commands Supported & Effects Log Page: Not Supported
00:33:29.263 Feature Identifiers & Effects Log Page:May Support
00:33:29.263 NVMe-MI Commands & Effects Log Page: May Support
00:33:29.263 Data Area 4 for Telemetry Log: Not Supported
00:33:29.263 Error Log Page Entries Supported: 128
00:33:29.263 Keep Alive: Not Supported
00:33:29.263
00:33:29.263 NVM Command Set Attributes
00:33:29.263 ==========================
00:33:29.263 Submission Queue Entry Size
00:33:29.263 Max: 1
00:33:29.263 Min: 1
00:33:29.263 Completion Queue Entry Size
00:33:29.263 Max: 1
00:33:29.263 Min: 1
00:33:29.263 Number of Namespaces: 0
00:33:29.263 Compare Command: Not Supported
00:33:29.263 Write Uncorrectable Command: Not Supported
00:33:29.263 Dataset Management Command: Not Supported
00:33:29.263 Write Zeroes Command: Not Supported
00:33:29.263 Set Features Save Field: Not Supported
00:33:29.264 Reservations: Not Supported
00:33:29.264 Timestamp: Not Supported
00:33:29.264 Copy: Not Supported
00:33:29.264 Volatile Write Cache: Not Present
00:33:29.264 Atomic Write Unit (Normal): 1
00:33:29.264 Atomic Write Unit (PFail): 1
00:33:29.264 Atomic Compare & Write Unit: 1
00:33:29.264 Fused Compare & Write: Supported
00:33:29.264 Scatter-Gather List
00:33:29.264 SGL Command Set: Supported
00:33:29.264 SGL Keyed: Supported
00:33:29.264 SGL Bit Bucket Descriptor: Not Supported
00:33:29.264 SGL Metadata Pointer: Not Supported
00:33:29.264 Oversized SGL: Not Supported
00:33:29.264 SGL Metadata Address: Not Supported
00:33:29.264 SGL Offset: Supported
00:33:29.264 Transport SGL Data Block: Not Supported
00:33:29.264 Replay Protected Memory Block: Not Supported
00:33:29.264
00:33:29.264 Firmware Slot Information
00:33:29.264 =========================
00:33:29.264 Active slot: 0
00:33:29.264
00:33:29.264
00:33:29.264 Error Log
00:33:29.264 =========
00:33:29.264
00:33:29.264 Active Namespaces
00:33:29.264 =================
00:33:29.264 Discovery Log Page
00:33:29.264 ==================
00:33:29.264 Generation Counter: 2
00:33:29.264 Number of Records: 2
00:33:29.264 Record Format: 0
00:33:29.264
00:33:29.264 Discovery Log Entry 0
00:33:29.264 ----------------------
00:33:29.264 Transport Type: 3 (TCP)
00:33:29.264 Address Family: 1 (IPv4)
00:33:29.264 Subsystem Type: 3 (Current Discovery Subsystem)
00:33:29.264 Entry Flags:
00:33:29.264 Duplicate Returned Information: 1
00:33:29.264 Explicit Persistent Connection Support for Discovery: 1
00:33:29.264 Transport Requirements:
00:33:29.264 Secure Channel: Not Required
00:33:29.264 Port ID: 0 (0x0000)
00:33:29.264 Controller ID: 65535 (0xffff)
00:33:29.264 Admin Max SQ Size: 128
00:33:29.264 Transport Service Identifier: 4420
00:33:29.264 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:33:29.264 Transport Address: 10.0.0.2
00:33:29.264 Discovery Log Entry 1
00:33:29.264 ----------------------
00:33:29.264 Transport Type: 3 (TCP)
00:33:29.264 Address Family: 1 (IPv4)
00:33:29.264 Subsystem Type: 2 (NVM Subsystem)
00:33:29.264 Entry Flags:
00:33:29.264 Duplicate Returned Information: 0
00:33:29.264 Explicit Persistent Connection Support for Discovery: 0
00:33:29.264 Transport Requirements:
00:33:29.264 Secure Channel: Not Required
00:33:29.264 Port ID: 0 (0x0000)
00:33:29.264 Controller ID: 65535 (0xffff)
00:33:29.264 Admin Max SQ Size: 128
00:33:29.264 Transport Service Identifier: 4420
00:33:29.264 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:33:29.264 Transport Address: 10.0.0.2 [2024-12-05 11:16:53.805825] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:33:29.264 [2024-12-05 11:16:53.805839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d600) on tqpair=0x1d4cd90
00:33:29.264 [2024-12-05 11:16:53.805847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:29.264 [2024-12-05 11:16:53.805854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d780) on tqpair=0x1d4cd90
00:33:29.264 [2024-12-05 11:16:53.805859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:29.264 [2024-12-05 11:16:53.805866] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8d900) on tqpair=0x1d4cd90
00:33:29.264 [2024-12-05 11:16:53.805871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:29.264 [2024-12-05 11:16:53.805877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90
00:33:29.264 [2024-12-05 11:16:53.805882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:29.264 [2024-12-05 11:16:53.805900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:33:29.264 [2024-12-05 11:16:53.805905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:33:29.264 [2024-12-05 11:16:53.805909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90)
00:33:29.264 [2024-12-05 11:16:53.805918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.264 [2024-12-05 11:16:53.805944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0
00:33:29.264 [2024-12-05 11:16:53.805998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:33:29.264 [2024-12-05 11:16:53.806003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:33:29.264 [2024-12-05 11:16:53.806007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:33:29.264 [2024-12-05 11:16:53.806011]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.264 [2024-12-05 11:16:53.806019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.264 [2024-12-05 11:16:53.806032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.264 [2024-12-05 11:16:53.806051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.264 [2024-12-05 11:16:53.806112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.264 [2024-12-05 11:16:53.806118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.264 [2024-12-05 11:16:53.806122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.264 [2024-12-05 11:16:53.806131] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:33:29.264 [2024-12-05 11:16:53.806136] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:33:29.264 [2024-12-05 11:16:53.806145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.264 [2024-12-05 11:16:53.806158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.264 [2024-12-05 11:16:53.806173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.264 [2024-12-05 11:16:53.806213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.264 [2024-12-05 11:16:53.806219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.264 [2024-12-05 11:16:53.806222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.264 [2024-12-05 11:16:53.806236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.264 [2024-12-05 11:16:53.806250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.264 [2024-12-05 11:16:53.806264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.264 [2024-12-05 11:16:53.806317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.264 [2024-12-05 11:16:53.806323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.264 [2024-12-05 
11:16:53.806327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.264 [2024-12-05 11:16:53.806340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.264 [2024-12-05 11:16:53.806354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.264 [2024-12-05 11:16:53.806368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.264 [2024-12-05 11:16:53.806420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.264 [2024-12-05 11:16:53.806426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.264 [2024-12-05 11:16:53.806429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.264 [2024-12-05 11:16:53.806442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.264 [2024-12-05 11:16:53.806456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.264 [2024-12-05 11:16:53.806470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.264 [2024-12-05 11:16:53.806521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.264 [2024-12-05 11:16:53.806526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.264 [2024-12-05 11:16:53.806530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.264 [2024-12-05 11:16:53.806542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806546] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.264 [2024-12-05 11:16:53.806550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.264 [2024-12-05 11:16:53.806557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.264 [2024-12-05 11:16:53.806571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.265 [2024-12-05 11:16:53.806630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.265 [2024-12-05 11:16:53.806637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.265 [2024-12-05 11:16:53.806640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on 
tqpair=0x1d4cd90 00:33:29.265 [2024-12-05 11:16:53.806653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.265 [2024-12-05 11:16:53.806667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.265 [2024-12-05 11:16:53.806693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.265 [2024-12-05 11:16:53.806742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.265 [2024-12-05 11:16:53.806747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.265 [2024-12-05 11:16:53.806751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.265 [2024-12-05 11:16:53.806763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.265 [2024-12-05 11:16:53.806777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.265 [2024-12-05 11:16:53.806792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.265 [2024-12-05 11:16:53.806851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.265 [2024-12-05 11:16:53.806857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.265 [2024-12-05 11:16:53.806861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.265 [2024-12-05 11:16:53.806873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.265 [2024-12-05 11:16:53.806887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.265 [2024-12-05 11:16:53.806901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.265 [2024-12-05 11:16:53.806963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.265 [2024-12-05 11:16:53.806968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.265 [2024-12-05 11:16:53.806972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.265 [2024-12-05 11:16:53.806984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806988] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.806992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.265 [2024-12-05 11:16:53.806999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.265 [2024-12-05 11:16:53.807013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.265 [2024-12-05 11:16:53.807063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.265 [2024-12-05 11:16:53.807068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.265 [2024-12-05 11:16:53.807072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.265 [2024-12-05 11:16:53.807084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.265 [2024-12-05 11:16:53.807097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.265 [2024-12-05 11:16:53.807114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.265 [2024-12-05 11:16:53.807179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.265 [2024-12-05 11:16:53.807185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.265 [2024-12-05 11:16:53.807189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.265 [2024-12-05 11:16:53.807201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.265 [2024-12-05 11:16:53.807214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.265 [2024-12-05 11:16:53.807228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0 00:33:29.265 [2024-12-05 11:16:53.807281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.265 [2024-12-05 11:16:53.807286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.265 [2024-12-05 11:16:53.807290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90 00:33:29.265 [2024-12-05 11:16:53.807302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.265 [2024-12-05 11:16:53.807311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90) 00:33:29.265 
[2024-12-05 11:16:53.807317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.265 [2024-12-05 11:16:53.807331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0
00:33:29.265 [2024-12-05 11:16:53.807381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:33:29.265 [2024-12-05 11:16:53.807387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:33:29.265 [2024-12-05 11:16:53.807390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:33:29.265 [2024-12-05 11:16:53.807394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90
00:33:29.265 [2024-12-05 11:16:53.807402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:33:29.265 [2024-12-05 11:16:53.807406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:33:29.265 [2024-12-05 11:16:53.807410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d4cd90)
00:33:29.265 [... identical FABRIC PROPERTY GET poll iterations from 11:16:53.807416 through 11:16:53.809583 omitted as duplicates ...]
00:33:29.267 [2024-12-05 11:16:53.813602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:29.267 [2024-12-05 11:16:53.813635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d8da80, cid 3, qid 0
00:33:29.267 [2024-12-05 11:16:53.813693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:33:29.267 [2024-12-05 11:16:53.813700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:33:29.267 [2024-12-05 11:16:53.813704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:33:29.267 [2024-12-05 11:16:53.813708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d8da80) on tqpair=0x1d4cd90
00:33:29.267 [2024-12-05 11:16:53.813717] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:33:29.267
00:33:29.267 11:16:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:33:29.267 [2024-12-05 11:16:53.853535] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
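The spdk_nvme_identify invocation above drives SPDK's public host API; a minimal sketch of the equivalent, assuming the transport ID string from the -r argument (the file name, app name, and error handling are illustrative, not the tool's actual source):

    /* identify_sketch.c -- hedged sketch, not the spdk_nvme_identify tool itself */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env = {};
            spdk_env_opts_init(&env);
            env.name = "identify_sketch";            /* hypothetical app name */
            if (spdk_env_init(&env) < 0) {
                    return 1;
            }

            struct spdk_nvme_transport_id trid = {};
            /* Same transport ID string the test passes via -r above. */
            spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

            /* spdk_nvme_connect() runs the init state machine traced below:
             * icreq/icresp, FABRIC CONNECT, the CC.EN handshake, IDENTIFY, AER setup. */
            struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            /* sn/mn are fixed-width, space-padded fields, not NUL-terminated. */
            printf("Serial Number: %.20s\nModel Number: %.40s\n",
                   (const char *)cdata->sn, (const char *)cdata->mn);

            spdk_nvme_detach(ctrlr);
            return 0;
    }

The "Serial Number" and "Model Number" lines correspond to fields in the controller report printed further below.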
00:33:29.267 [2024-12-05 11:16:53.853618] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87622 ] 00:33:29.531 [2024-12-05 11:16:54.011632] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:33:29.531 [2024-12-05 11:16:54.011711] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:29.531 [2024-12-05 11:16:54.011717] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:29.531 [2024-12-05 11:16:54.011739] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:29.531 [2024-12-05 11:16:54.011755] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:29.531 [2024-12-05 11:16:54.012175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:33:29.531 [2024-12-05 11:16:54.012223] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xac7d90 0 00:33:29.531 [2024-12-05 11:16:54.026612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:29.531 [2024-12-05 11:16:54.026637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:29.531 [2024-12-05 11:16:54.026643] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:29.531 [2024-12-05 11:16:54.026646] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:29.531 [2024-12-05 11:16:54.026691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.026697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.026702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.531 [2024-12-05 11:16:54.026719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:29.531 [2024-12-05 11:16:54.026747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.531 [2024-12-05 11:16:54.033029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.531 [2024-12-05 11:16:54.033047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.531 [2024-12-05 11:16:54.033052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.531 [2024-12-05 11:16:54.033068] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:29.531 [2024-12-05 11:16:54.033077] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:33:29.531 [2024-12-05 11:16:54.033084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:33:29.531 [2024-12-05 11:16:54.033106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033114] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.531 [2024-12-05 11:16:54.033124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.531 [2024-12-05 11:16:54.033149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.531 [2024-12-05 11:16:54.033224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.531 [2024-12-05 11:16:54.033231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.531 [2024-12-05 11:16:54.033235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.531 [2024-12-05 11:16:54.033245] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:33:29.531 [2024-12-05 11:16:54.033252] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:33:29.531 [2024-12-05 11:16:54.033259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.531 [2024-12-05 11:16:54.033274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.531 [2024-12-05 11:16:54.033289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.531 [2024-12-05 11:16:54.033344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.531 [2024-12-05 11:16:54.033350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.531 [2024-12-05 11:16:54.033354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.531 [2024-12-05 11:16:54.033364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:33:29.531 [2024-12-05 11:16:54.033372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:33:29.531 [2024-12-05 11:16:54.033379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.531 [2024-12-05 11:16:54.033393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.531 [2024-12-05 11:16:54.033407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.531 [2024-12-05 11:16:54.033468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.531 [2024-12-05 11:16:54.033474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.531 [2024-12-05 
11:16:54.033477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.531 [2024-12-05 11:16:54.033481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.532 [2024-12-05 11:16:54.033487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:29.532 [2024-12-05 11:16:54.033498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.033512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.532 [2024-12-05 11:16:54.033525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.532 [2024-12-05 11:16:54.033580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.532 [2024-12-05 11:16:54.033586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.532 [2024-12-05 11:16:54.033601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.532 [2024-12-05 11:16:54.033610] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:33:29.532 [2024-12-05 11:16:54.033616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:33:29.532 [2024-12-05 11:16:54.033623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:29.532 [2024-12-05 11:16:54.033734] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:33:29.532 [2024-12-05 11:16:54.033741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:29.532 [2024-12-05 11:16:54.033751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.033765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.532 [2024-12-05 11:16:54.033780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.532 [2024-12-05 11:16:54.033842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.532 [2024-12-05 11:16:54.033848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.532 [2024-12-05 11:16:54.033852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.532 
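At this point in the trace the host has written CC.EN = 1 and is about to poll for CSTS.RDY = 1. Reduced to a sketch; prop_get()/prop_set()/timed_out() are hypothetical stand-ins for the Fabrics Property Get/Set capsules shown in the NOTICE lines and for the 15000 ms state timeouts logged above, not SPDK API:

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_REG_CC   0x14   /* Controller Configuration */
    #define NVME_REG_CSTS 0x1c   /* Controller Status */

    uint32_t prop_get(uint32_t offset);               /* hypothetical: Property Get */
    void     prop_set(uint32_t offset, uint32_t v);   /* hypothetical: Property Set */
    bool     timed_out(uint32_t ms);                  /* hypothetical timeout check */

    static bool enable_controller(void)
    {
            uint32_t cc = prop_get(NVME_REG_CC);

            if (cc & 0x1) {                            /* EN already 1: disable first */
                    prop_set(NVME_REG_CC, cc & ~0x1u);
                    while (prop_get(NVME_REG_CSTS) & 0x1) {     /* wait CSTS.RDY = 0 */
                            if (timed_out(15000)) return false;
                    }
            }
            prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | 0x1); /* set CC.EN = 1 */
            while (!(prop_get(NVME_REG_CSTS) & 0x1)) {          /* wait CSTS.RDY = 1 */
                    if (timed_out(15000)) return false;
            }
            return true;  /* next: IDENTIFY, AER config, keep-alive, queue count */
    }

The trace continues with exactly that RDY = 1 wait and the "controller is ready" transition: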
[2024-12-05 11:16:54.033861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:29.532 [2024-12-05 11:16:54.033870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.033884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.532 [2024-12-05 11:16:54.033898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.532 [2024-12-05 11:16:54.033962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.532 [2024-12-05 11:16:54.033968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.532 [2024-12-05 11:16:54.033972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.033975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.532 [2024-12-05 11:16:54.033980] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:29.532 [2024-12-05 11:16:54.033985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:33:29.532 [2024-12-05 11:16:54.033993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:33:29.532 [2024-12-05 11:16:54.034003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:33:29.532 [2024-12-05 11:16:54.034013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.034023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.532 [2024-12-05 11:16:54.034037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.532 [2024-12-05 11:16:54.034165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.532 [2024-12-05 11:16:54.034171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.532 [2024-12-05 11:16:54.034175] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034180] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac7d90): datao=0, datal=4096, cccid=0 00:33:29.532 [2024-12-05 11:16:54.034185] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08600) on tqpair(0xac7d90): expected_datao=0, payload_size=4096 00:33:29.532 [2024-12-05 11:16:54.034191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034199] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034204] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.532 [2024-12-05 11:16:54.034218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.532 [2024-12-05 11:16:54.034222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.532 [2024-12-05 11:16:54.034235] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:33:29.532 [2024-12-05 11:16:54.034242] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:33:29.532 [2024-12-05 11:16:54.034247] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:33:29.532 [2024-12-05 11:16:54.034256] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:33:29.532 [2024-12-05 11:16:54.034262] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:33:29.532 [2024-12-05 11:16:54.034267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:33:29.532 [2024-12-05 11:16:54.034277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:33:29.532 [2024-12-05 11:16:54.034284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.034299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:29.532 [2024-12-05 11:16:54.034313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.532 [2024-12-05 11:16:54.034385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.532 [2024-12-05 11:16:54.034391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.532 [2024-12-05 11:16:54.034395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.532 [2024-12-05 11:16:54.034407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.034421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.532 [2024-12-05 11:16:54.034429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.034442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.532 [2024-12-05 11:16:54.034448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.034462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.532 [2024-12-05 11:16:54.034468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.034482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.532 [2024-12-05 11:16:54.034487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:29.532 [2024-12-05 11:16:54.034495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:29.532 [2024-12-05 11:16:54.034502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.532 [2024-12-05 11:16:54.034505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac7d90) 00:33:29.532 [2024-12-05 11:16:54.034512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.532 [2024-12-05 11:16:54.034531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08600, cid 0, qid 0 00:33:29.532 [2024-12-05 11:16:54.034537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08780, cid 1, qid 0 00:33:29.532 [2024-12-05 11:16:54.034541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08900, cid 2, qid 0 00:33:29.532 [2024-12-05 11:16:54.034546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.532 [2024-12-05 11:16:54.034551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08c00, cid 4, qid 0 00:33:29.532 [2024-12-05 11:16:54.034660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.532 [2024-12-05 11:16:54.034666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.532 [2024-12-05 11:16:54.034670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.034675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08c00) on tqpair=0xac7d90 00:33:29.533 [2024-12-05 11:16:54.034680] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:33:29.533 [2024-12-05 11:16:54.034686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.034695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.034702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.034708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.034712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.034716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac7d90) 00:33:29.533 [2024-12-05 11:16:54.034722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:29.533 [2024-12-05 11:16:54.034737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08c00, cid 4, qid 0 00:33:29.533 [2024-12-05 11:16:54.034797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.533 [2024-12-05 11:16:54.034803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.533 [2024-12-05 11:16:54.034807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.034811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08c00) on tqpair=0xac7d90 00:33:29.533 [2024-12-05 11:16:54.034868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.034877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.034886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.034890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac7d90) 00:33:29.533 [2024-12-05 11:16:54.034896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.533 [2024-12-05 11:16:54.034910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08c00, cid 4, qid 0 00:33:29.533 [2024-12-05 11:16:54.034985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.533 [2024-12-05 11:16:54.034991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.533 [2024-12-05 11:16:54.034995] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.034999] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac7d90): datao=0, datal=4096, cccid=4 00:33:29.533 [2024-12-05 11:16:54.035005] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08c00) on tqpair(0xac7d90): expected_datao=0, payload_size=4096 00:33:29.533 [2024-12-05 11:16:54.035010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035017] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035021] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.533 [2024-12-05 
11:16:54.035034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.533 [2024-12-05 11:16:54.035038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08c00) on tqpair=0xac7d90 00:33:29.533 [2024-12-05 11:16:54.035053] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:33:29.533 [2024-12-05 11:16:54.035066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac7d90) 00:33:29.533 [2024-12-05 11:16:54.035093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.533 [2024-12-05 11:16:54.035108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08c00, cid 4, qid 0 00:33:29.533 [2024-12-05 11:16:54.035203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.533 [2024-12-05 11:16:54.035209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.533 [2024-12-05 11:16:54.035213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035217] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac7d90): datao=0, datal=4096, cccid=4 00:33:29.533 [2024-12-05 11:16:54.035222] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08c00) on tqpair(0xac7d90): expected_datao=0, payload_size=4096 00:33:29.533 [2024-12-05 11:16:54.035227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035234] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035237] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.533 [2024-12-05 11:16:54.035251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.533 [2024-12-05 11:16:54.035255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08c00) on tqpair=0xac7d90 00:33:29.533 [2024-12-05 11:16:54.035280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac7d90) 00:33:29.533 [2024-12-05 11:16:54.035308] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.533 [2024-12-05 11:16:54.035322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08c00, cid 4, qid 0 00:33:29.533 [2024-12-05 11:16:54.035394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.533 [2024-12-05 11:16:54.035402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.533 [2024-12-05 11:16:54.035406] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac7d90): datao=0, datal=4096, cccid=4 00:33:29.533 [2024-12-05 11:16:54.035415] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08c00) on tqpair(0xac7d90): expected_datao=0, payload_size=4096 00:33:29.533 [2024-12-05 11:16:54.035421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035427] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035431] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.533 [2024-12-05 11:16:54.035445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.533 [2024-12-05 11:16:54.035449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08c00) on tqpair=0xac7d90 00:33:29.533 [2024-12-05 11:16:54.035460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035508] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:33:29.533 [2024-12-05 11:16:54.035513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:33:29.533 [2024-12-05 11:16:54.035519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:33:29.533 [2024-12-05 11:16:54.035539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xac7d90) 00:33:29.533 [2024-12-05 11:16:54.035550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.533 [2024-12-05 11:16:54.035558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xac7d90) 00:33:29.533 [2024-12-05 11:16:54.035572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:29.533 [2024-12-05 11:16:54.035602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08c00, cid 4, qid 0 00:33:29.533 [2024-12-05 11:16:54.035608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08d80, cid 5, qid 0 00:33:29.533 [2024-12-05 11:16:54.035686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.533 [2024-12-05 11:16:54.035692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.533 [2024-12-05 11:16:54.035696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08c00) on tqpair=0xac7d90 00:33:29.533 [2024-12-05 11:16:54.035708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.533 [2024-12-05 11:16:54.035713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.533 [2024-12-05 11:16:54.035717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08d80) on tqpair=0xac7d90 00:33:29.533 [2024-12-05 11:16:54.035731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.533 [2024-12-05 11:16:54.035735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xac7d90) 00:33:29.533 [2024-12-05 11:16:54.035741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.534 [2024-12-05 11:16:54.035755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08d80, cid 5, qid 0 00:33:29.534 [2024-12-05 11:16:54.035820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.534 [2024-12-05 11:16:54.035826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.534 [2024-12-05 11:16:54.035830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.035834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08d80) on tqpair=0xac7d90 00:33:29.534 [2024-12-05 11:16:54.035843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.035848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xac7d90) 00:33:29.534 [2024-12-05 11:16:54.035854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.534 [2024-12-05 11:16:54.035867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08d80, cid 5, qid 0 00:33:29.534 [2024-12-05 11:16:54.035931] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.534 [2024-12-05 11:16:54.035938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.534 [2024-12-05 11:16:54.035941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.035946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08d80) on tqpair=0xac7d90 00:33:29.534 [2024-12-05 11:16:54.035955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.035959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xac7d90) 00:33:29.534 [2024-12-05 11:16:54.035965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.534 [2024-12-05 11:16:54.035978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08d80, cid 5, qid 0 00:33:29.534 [2024-12-05 11:16:54.036043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.534 [2024-12-05 11:16:54.036049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.534 [2024-12-05 11:16:54.036053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08d80) on tqpair=0xac7d90 00:33:29.534 [2024-12-05 11:16:54.036098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xac7d90) 00:33:29.534 [2024-12-05 11:16:54.036109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.534 [2024-12-05 11:16:54.036116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac7d90) 00:33:29.534 [2024-12-05 11:16:54.036127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.534 [2024-12-05 11:16:54.036135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xac7d90) 00:33:29.534 [2024-12-05 11:16:54.036145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.534 [2024-12-05 11:16:54.036153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xac7d90) 00:33:29.534 [2024-12-05 11:16:54.036163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.534 [2024-12-05 11:16:54.036179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08d80, cid 5, qid 0 00:33:29.534 [2024-12-05 11:16:54.036185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08c00, cid 4, qid 0 00:33:29.534 
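The "MDTS max_xfer_size 131072" noted during identify above (and the "Max Data Transfer Size: 131072" line in the report below) follows from two fields: CAP.MPSMIN fixes the minimum page size at 2^(12+MPSMIN) bytes, and Identify Controller's MDTS caps a transfer at 2^MDTS such pages. A worked sketch; the concrete MPSMIN = 0 / MDTS = 5 pair is an assumption consistent with the logged 131072:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t mpsmin = 0;                        /* from CAP.MPSMIN (assumed) */
            uint32_t mdts   = 5;                        /* from Identify Controller (assumed) */
            uint64_t min_page = 1ull << (12 + mpsmin);  /* 2^12 = 4096 bytes */
            uint64_t max_xfer = min_page << mdts;       /* 4096 << 5 = 131072 */
            printf("max transfer size: %llu bytes\n", (unsigned long long)max_xfer);
            return 0;
    }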
[2024-12-05 11:16:54.036189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08f00, cid 6, qid 0 00:33:29.534 [2024-12-05 11:16:54.036194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09080, cid 7, qid 0 00:33:29.534 [2024-12-05 11:16:54.036328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.534 [2024-12-05 11:16:54.036342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.534 [2024-12-05 11:16:54.036347] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036351] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac7d90): datao=0, datal=8192, cccid=5 00:33:29.534 [2024-12-05 11:16:54.036356] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08d80) on tqpair(0xac7d90): expected_datao=0, payload_size=8192 00:33:29.534 [2024-12-05 11:16:54.036362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036379] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036384] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.534 [2024-12-05 11:16:54.036396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.534 [2024-12-05 11:16:54.036399] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036403] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac7d90): datao=0, datal=512, cccid=4 00:33:29.534 [2024-12-05 11:16:54.036409] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08c00) on tqpair(0xac7d90): expected_datao=0, payload_size=512 00:33:29.534 [2024-12-05 11:16:54.036414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036420] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036423] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.534 [2024-12-05 11:16:54.036437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.534 [2024-12-05 11:16:54.036441] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036444] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac7d90): datao=0, datal=512, cccid=6 00:33:29.534 [2024-12-05 11:16:54.036449] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb08f00) on tqpair(0xac7d90): expected_datao=0, payload_size=512 00:33:29.534 [2024-12-05 11:16:54.036454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036461] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036464] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:29.534 [2024-12-05 11:16:54.036475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:29.534 [2024-12-05 11:16:54.036479] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036483] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0xac7d90): datao=0, datal=4096, cccid=7 00:33:29.534 [2024-12-05 11:16:54.036487] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb09080) on tqpair(0xac7d90): expected_datao=0, payload_size=4096 00:33:29.534 [2024-12-05 11:16:54.036492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036498] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036502] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.534 [2024-12-05 11:16:54.036515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.534 [2024-12-05 11:16:54.036519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08d80) on tqpair=0xac7d90 00:33:29.534 [2024-12-05 11:16:54.036539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.534 [2024-12-05 11:16:54.036545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.534 [2024-12-05 11:16:54.036549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08c00) on tqpair=0xac7d90 00:33:29.534 [2024-12-05 11:16:54.036567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.534 [2024-12-05 11:16:54.036573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.534 [2024-12-05 11:16:54.036577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.534 [2024-12-05 11:16:54.036581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08f00) on tqpair=0xac7d90 00:33:29.534 ===================================================== 00:33:29.534 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:29.534 ===================================================== 00:33:29.534 Controller Capabilities/Features 00:33:29.534 ================================ 00:33:29.534 Vendor ID: 8086 00:33:29.534 Subsystem Vendor ID: 8086 00:33:29.534 Serial Number: SPDK00000000000001 00:33:29.534 Model Number: SPDK bdev Controller 00:33:29.534 Firmware Version: 25.01 00:33:29.534 Recommended Arb Burst: 6 00:33:29.534 IEEE OUI Identifier: e4 d2 5c 00:33:29.534 Multi-path I/O 00:33:29.534 May have multiple subsystem ports: Yes 00:33:29.534 May have multiple controllers: Yes 00:33:29.534 Associated with SR-IOV VF: No 00:33:29.534 Max Data Transfer Size: 131072 00:33:29.534 Max Number of Namespaces: 32 00:33:29.534 Max Number of I/O Queues: 127 00:33:29.534 NVMe Specification Version (VS): 1.3 00:33:29.534 NVMe Specification Version (Identify): 1.3 00:33:29.534 Maximum Queue Entries: 128 00:33:29.534 Contiguous Queues Required: Yes 00:33:29.534 Arbitration Mechanisms Supported 00:33:29.534 Weighted Round Robin: Not Supported 00:33:29.534 Vendor Specific: Not Supported 00:33:29.534 Reset Timeout: 15000 ms 00:33:29.534 Doorbell Stride: 4 bytes 00:33:29.534 NVM Subsystem Reset: Not Supported 00:33:29.534 Command Sets Supported 00:33:29.534 NVM Command Set: Supported 00:33:29.534 Boot Partition: Not Supported 00:33:29.534 Memory Page Size Minimum: 4096 bytes 00:33:29.534 Memory Page Size Maximum: 4096 bytes 00:33:29.534 Persistent Memory Region: Not Supported 00:33:29.534 
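The *DEBUG* traces above show the transport-level exchange that produces this controller report: each C2H DATA PDU (pdu type = 7) delivers a slice of admin-command payload for one cccid, and each capsule response (pdu type = 5) then completes the matching tcp_req via nvme_tcp_req_complete. The report itself comes from SPDK's identify example driven by host/identify.sh; a minimal sketch of issuing the same query by hand against this target follows, hedged in that the binary path and the -L debug-log flag reflect common SPDK build conventions rather than values captured in this run (the capability listing resumes right after the sketch).

# Hedged sketch: re-run the identify query against the subsystem under test.
# 10.0.0.2:4420 and the NQN are taken from the log above; the binary path
# and the -L flag name are assumptions about a typical SPDK debug build.
SPDK_IDENTIFY=./build/examples/identify        # illustrative location of the example binary
"$SPDK_IDENTIFY" \
    -r "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1" \
    -L nvme                                    # debug builds only: emits *DEBUG* traces like those above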
Optional Asynchronous Events Supported 00:33:29.534 Namespace Attribute Notices: Supported 00:33:29.534 Firmware Activation Notices: Not Supported 00:33:29.534 ANA Change Notices: Not Supported 00:33:29.534 PLE Aggregate Log Change Notices: Not Supported 00:33:29.534 LBA Status Info Alert Notices: Not Supported 00:33:29.534 EGE Aggregate Log Change Notices: Not Supported 00:33:29.534 Normal NVM Subsystem Shutdown event: Not Supported 00:33:29.534 Zone Descriptor Change Notices: Not Supported 00:33:29.534 Discovery Log Change Notices: Not Supported 00:33:29.534 Controller Attributes 00:33:29.535 128-bit Host Identifier: Supported 00:33:29.535 Non-Operational Permissive Mode: Not Supported 00:33:29.535 NVM Sets: Not Supported 00:33:29.535 Read Recovery Levels: Not Supported 00:33:29.535 Endurance Groups: Not Supported 00:33:29.535 Predictable Latency Mode: Not Supported 00:33:29.535 Traffic Based Keep Alive: Not Supported 00:33:29.535 Namespace Granularity: Not Supported 00:33:29.535 SQ Associations: Not Supported 00:33:29.535 UUID List: Not Supported 00:33:29.535 Multi-Domain Subsystem: Not Supported 00:33:29.535 Fixed Capacity Management: Not Supported 00:33:29.535 Variable Capacity Management: Not Supported 00:33:29.535 Delete Endurance Group: Not Supported 00:33:29.535 Delete NVM Set: Not Supported 00:33:29.535 Extended LBA Formats Supported: Not Supported 00:33:29.535 Flexible Data Placement Supported: Not Supported 00:33:29.535 00:33:29.535 Controller Memory Buffer Support 00:33:29.535 ================================ 00:33:29.535 Supported: No 00:33:29.535 00:33:29.535 Persistent Memory Region Support 00:33:29.535 ================================ 00:33:29.535 Supported: No 00:33:29.535 00:33:29.535 Admin Command Set Attributes 00:33:29.535 ============================ 00:33:29.535 Security Send/Receive: Not Supported 00:33:29.535 Format NVM: Not Supported 00:33:29.535 Firmware Activate/Download: Not Supported 00:33:29.535 Namespace Management: Not Supported 00:33:29.535 Device Self-Test: Not Supported 00:33:29.535 Directives: Not Supported 00:33:29.535 NVMe-MI: Not Supported 00:33:29.535 Virtualization Management: Not Supported 00:33:29.535 Doorbell Buffer Config: Not Supported 00:33:29.535 Get LBA Status Capability: Not Supported 00:33:29.535 Command & Feature Lockdown Capability: Not Supported 00:33:29.535 Abort Command Limit: 4 00:33:29.535 Async Event Request Limit: 4 00:33:29.535 Number of Firmware Slots: N/A 00:33:29.535 Firmware Slot 1 Read-Only: N/A 00:33:29.535 Firmware Activation Without Reset: N/A 00:33:29.535 Multiple Update Detection Support: N/A 00:33:29.535 Firmware Update Granularity: No Information Provided 00:33:29.535 Per-Namespace SMART Log: No 00:33:29.535 Asymmetric Namespace Access Log Page: Not Supported 00:33:29.535 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:33:29.535 Command Effects Log Page: Supported 00:33:29.535 Get Log Page Extended Data: Supported 00:33:29.535 Telemetry Log Pages: Not Supported 00:33:29.535 Persistent Event Log Pages: Not Supported 00:33:29.535 Supported Log Pages Log Page: May Support 00:33:29.535 Commands Supported & Effects Log Page: Not Supported 00:33:29.535 Feature Identifiers & Effects Log Page: May Support 00:33:29.535 NVMe-MI Commands & Effects Log Page: May Support 00:33:29.535 Data Area 4 for Telemetry Log: Not Supported 00:33:29.535 Error Log Page Entries Supported: 128 00:33:29.535 Keep Alive: Supported 00:33:29.535 Keep Alive Granularity: 10000 ms 00:33:29.535 00:33:29.535 NVM Command Set Attributes 00:33:29.535 
========================== 00:33:29.535 Submission Queue Entry Size 00:33:29.535 Max: 64 00:33:29.535 Min: 64 00:33:29.535 Completion Queue Entry Size 00:33:29.535 Max: 16 00:33:29.535 Min: 16 00:33:29.535 Number of Namespaces: 32 00:33:29.535 Compare Command: Supported 00:33:29.535 Write Uncorrectable Command: Not Supported 00:33:29.535 Dataset Management Command: Supported 00:33:29.535 Write Zeroes Command: Supported 00:33:29.535 Set Features Save Field: Not Supported 00:33:29.535 Reservations: Supported 00:33:29.535 Timestamp: Not Supported 00:33:29.535 Copy: Supported 00:33:29.535 Volatile Write Cache: Present 00:33:29.535 Atomic Write Unit (Normal): 1 00:33:29.535 Atomic Write Unit (PFail): 1 00:33:29.535 Atomic Compare & Write Unit: 1 00:33:29.535 Fused Compare & Write: Supported 00:33:29.535 Scatter-Gather List 00:33:29.535 SGL Command Set: Supported 00:33:29.535 SGL Keyed: Supported 00:33:29.535 SGL Bit Bucket Descriptor: Not Supported 00:33:29.535 SGL Metadata Pointer: Not Supported 00:33:29.535 Oversized SGL: Not Supported 00:33:29.535 SGL Metadata Address: Not Supported 00:33:29.535 SGL Offset: Supported 00:33:29.535 Transport SGL Data Block: Not Supported 00:33:29.535 Replay Protected Memory Block: Not Supported 00:33:29.535 00:33:29.535 Firmware Slot Information 00:33:29.535 ========================= 00:33:29.535 Active slot: 1 00:33:29.535 Slot 1 Firmware Revision: 25.01 00:33:29.535 00:33:29.535 00:33:29.535 Commands Supported and Effects 00:33:29.535 ============================== 00:33:29.535 Admin Commands 00:33:29.535 -------------- 00:33:29.535 Get Log Page (02h): Supported 00:33:29.535 Identify (06h): Supported 00:33:29.535 Abort (08h): Supported 00:33:29.535 Set Features (09h): Supported 00:33:29.535 Get Features (0Ah): Supported 00:33:29.535 Asynchronous Event Request (0Ch): Supported 00:33:29.535 Keep Alive (18h): Supported 00:33:29.535 I/O Commands 00:33:29.535 ------------ 00:33:29.535 Flush (00h): Supported LBA-Change 00:33:29.535 Write (01h): Supported LBA-Change 00:33:29.535 Read (02h): Supported 00:33:29.535 Compare (05h): Supported 00:33:29.535 Write Zeroes (08h): Supported LBA-Change 00:33:29.535 Dataset Management (09h): Supported LBA-Change 00:33:29.535 Copy (19h): Supported LBA-Change 00:33:29.535 00:33:29.535 Error Log 00:33:29.535 ========= 00:33:29.535 00:33:29.535 Arbitration 00:33:29.535 =========== 00:33:29.535 Arbitration Burst: 1 00:33:29.535 00:33:29.535 Power Management 00:33:29.535 ================ 00:33:29.535 Number of Power States: 1 00:33:29.535 Current Power State: Power State #0 00:33:29.535 Power State #0: 00:33:29.535 Max Power: 0.00 W 00:33:29.535 Non-Operational State: Operational 00:33:29.535 Entry Latency: Not Reported 00:33:29.535 Exit Latency: Not Reported 00:33:29.535 Relative Read Throughput: 0 00:33:29.535 Relative Read Latency: 0 00:33:29.535 Relative Write Throughput: 0 00:33:29.535 Relative Write Latency: 0 00:33:29.535 Idle Power: Not Reported 00:33:29.535 Active Power: Not Reported 00:33:29.535 Non-Operational Permissive Mode: Not Supported 00:33:29.535 00:33:29.535 Health Information 00:33:29.535 ================== 00:33:29.535 Critical Warnings: 00:33:29.535 Available Spare Space: OK 00:33:29.535 Temperature: OK 00:33:29.535 Device Reliability: OK 00:33:29.535 Read Only: No 00:33:29.535 Volatile Memory Backup: OK 00:33:29.535 Current Temperature: 0 Kelvin (-273 Celsius) 00:33:29.535 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:33:29.535 Available Spare: 0% 00:33:29.535 Available Spare Threshold: 0% 00:33:29.535 Life 
Percentage Used:[2024-12-05 11:16:54.041602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.535 [2024-12-05 11:16:54.041618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.535 [2024-12-05 11:16:54.041622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.535 [2024-12-05 11:16:54.041627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09080) on tqpair=0xac7d90 00:33:29.535 [2024-12-05 11:16:54.041739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.535 [2024-12-05 11:16:54.041745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xac7d90) 00:33:29.535 [2024-12-05 11:16:54.041752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.535 [2024-12-05 11:16:54.041775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb09080, cid 7, qid 0 00:33:29.535 [2024-12-05 11:16:54.041905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.535 [2024-12-05 11:16:54.041912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.535 [2024-12-05 11:16:54.041916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.535 [2024-12-05 11:16:54.041921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb09080) on tqpair=0xac7d90 00:33:29.535 [2024-12-05 11:16:54.041962] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:33:29.535 [2024-12-05 11:16:54.041972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08600) on tqpair=0xac7d90 00:33:29.535 [2024-12-05 11:16:54.041980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.535 [2024-12-05 11:16:54.041986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08780) on tqpair=0xac7d90 00:33:29.535 [2024-12-05 11:16:54.041991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.535 [2024-12-05 11:16:54.041996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08900) on tqpair=0xac7d90 00:33:29.535 [2024-12-05 11:16:54.042001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.535 [2024-12-05 11:16:54.042006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.535 [2024-12-05 11:16:54.042011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:29.536 [2024-12-05 11:16:54.042020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 
11:16:54.042112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.042118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.042237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.042243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042256] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:33:29.536 [2024-12-05 11:16:54.042261] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:33:29.536 [2024-12-05 11:16:54.042270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.042358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.042364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042409] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.042461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.042467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042492] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.042566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.042572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.042689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.042706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.042819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 
11:16:54.042825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.042921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.042928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.042931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.042944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.042952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.042958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.042971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.043025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.043036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.043040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.043044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.043053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.043058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.536 [2024-12-05 11:16:54.043062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.536 [2024-12-05 11:16:54.043068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.536 [2024-12-05 11:16:54.043082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.536 [2024-12-05 11:16:54.043141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.536 [2024-12-05 11:16:54.043147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.536 [2024-12-05 11:16:54.043151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.536 [2024-12-05 
11:16:54.043155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.536 [2024-12-05 11:16:54.043164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.043244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.043250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.043254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.043266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.043350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.043356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.043360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.043373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.043446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.043452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.043456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.043469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:33:29.537 [2024-12-05 11:16:54.043473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.043555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.043562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.043565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.043578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.043687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.043694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.043698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.043711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.043789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.043796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.043800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.043812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.043902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.043908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.043912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.043924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.043933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.043939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.043953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.044012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.044019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.044022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.044035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.044050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.044071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.044133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.044140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.044144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.044158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.044173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 
[2024-12-05 11:16:54.044186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.044245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.044259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.044263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.044276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.044291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.044305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.044372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.044382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.044387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.044400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.537 [2024-12-05 11:16:54.044415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.537 [2024-12-05 11:16:54.044429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.537 [2024-12-05 11:16:54.044478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.537 [2024-12-05 11:16:54.044484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.537 [2024-12-05 11:16:54.044488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.537 [2024-12-05 11:16:54.044493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.537 [2024-12-05 11:16:54.044501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.044516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.044529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.044596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:33:29.538 [2024-12-05 11:16:54.044602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.044606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.044619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.044633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.044648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.044699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.044705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.044709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.044722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.044736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.044749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.044814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.044820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.044824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.044836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.044850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.044863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.044923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.044933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.044938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:33:29.538 [2024-12-05 11:16:54.044942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.044951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.044959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.044965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.044979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.045047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.045053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.045057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.045070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.045084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.045097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.045160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.045166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.045170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.045182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.045197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.045212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.045267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.045273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.045277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.045290] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.045304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.045317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.045368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.045374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.045378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.045391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.045406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.045419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.045465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.045472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.045476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.045489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.045503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.045516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.045565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.045572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.045575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.045579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.048000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.048017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.048021] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac7d90) 00:33:29.538 [2024-12-05 11:16:54.048029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.538 [2024-12-05 11:16:54.048050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb08a80, cid 3, qid 0 00:33:29.538 [2024-12-05 11:16:54.048120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:29.538 [2024-12-05 11:16:54.048127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:29.538 [2024-12-05 11:16:54.048131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:29.538 [2024-12-05 11:16:54.048135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb08a80) on tqpair=0xac7d90 00:33:29.538 [2024-12-05 11:16:54.048144] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:33:29.538 0% 00:33:29.538 Data Units Read: 0 00:33:29.538 Data Units Written: 0 00:33:29.538 Host Read Commands: 0 00:33:29.538 Host Write Commands: 0 00:33:29.538 Controller Busy Time: 0 minutes 00:33:29.538 Power Cycles: 0 00:33:29.538 Power On Hours: 0 hours 00:33:29.538 Unsafe Shutdowns: 0 00:33:29.538 Unrecoverable Media Errors: 0 00:33:29.538 Lifetime Error Log Entries: 0 00:33:29.538 Warning Temperature Time: 0 minutes 00:33:29.538 Critical Temperature Time: 0 minutes 00:33:29.538 00:33:29.538 Number of Queues 00:33:29.538 ================ 00:33:29.538 Number of I/O Submission Queues: 127 00:33:29.538 Number of I/O Completion Queues: 127 00:33:29.538 00:33:29.538 Active Namespaces 00:33:29.538 ================= 00:33:29.539 Namespace ID:1 00:33:29.539 Error Recovery Timeout: Unlimited 00:33:29.539 Command Set Identifier: NVM (00h) 00:33:29.539 Deallocate: Supported 00:33:29.539 Deallocated/Unwritten Error: Not Supported 00:33:29.539 Deallocated Read Value: Unknown 00:33:29.539 Deallocate in Write Zeroes: Not Supported 00:33:29.539 Deallocated Guard Field: 0xFFFF 00:33:29.539 Flush: Supported 00:33:29.539 Reservation: Supported 00:33:29.539 Namespace Sharing Capabilities: Multiple Controllers 00:33:29.539 Size (in LBAs): 131072 (0GiB) 00:33:29.539 Capacity (in LBAs): 131072 (0GiB) 00:33:29.539 Utilization (in LBAs): 131072 (0GiB) 00:33:29.539 NGUID: ABCDEF0123456789ABCDEF0123456789 00:33:29.539 EUI64: ABCDEF0123456789 00:33:29.539 UUID: f31bafbb-1517-4170-acc6-ace6583a63c0 00:33:29.539 Thin Provisioning: Not Supported 00:33:29.539 Per-NS Atomic Units: Yes 00:33:29.539 Atomic Boundary Size (Normal): 0 00:33:29.539 Atomic Boundary Size (PFail): 0 00:33:29.539 Atomic Boundary Offset: 0 00:33:29.539 Maximum Single Source Range Length: 65535 00:33:29.539 Maximum Copy Length: 65535 00:33:29.539 Maximum Source Range Count: 1 00:33:29.539 NGUID/EUI64 Never Reused: No 00:33:29.539 Namespace Write Protected: No 00:33:29.539 Number of LBA Formats: 1 00:33:29.539 Current LBA Format: LBA Format #00 00:33:29.539 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:29.539 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
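The long run of near-identical FABRIC PROPERTY GET cycles above is the host driver polling the controller's CSTS register after initiating shutdown: nvme_ctrlr_shutdown_set_cc_done logged a shutdown timeout = 10000 ms, and nvme_ctrlr_shutdown_poll_async reports completion after 5 milliseconds here. Reduced to a hedged shell sketch, the loop looks like the following, where read_csts is a hypothetical helper standing in for the driver's Property Get and the bit layout follows the NVMe spec (SHST occupies CSTS bits 3:2, with 10b meaning shutdown processing complete):

# Illustrative poll loop mirroring the shutdown sequence traced above.
shutdown_timeout_ms=10000                        # matches "shutdown timeout = 10000 ms" in the trace
elapsed_ms=0
while (( elapsed_ms < shutdown_timeout_ms )); do
    csts=$(read_csts)                            # hypothetical helper: fabrics Property Get of CSTS
    if (( ((csts >> 2) & 0x3) == 0x2 )); then    # CSTS.SHST == 10b: shutdown complete
        break
    fi
    sleep 0.001                                  # poll granularity is illustrative, not SPDK's
    (( elapsed_ms += 1 ))
done

With the controller shut down and the subsystem deleted over JSON-RPC just above, the trace continues below with nvmftestfini: the kernel NVMe modules are unloaded, the target process is killed, and nvmf_fini tears down the virtual interfaces and iptables rules the test created.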
00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:29.539 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:29.539 rmmod nvme_tcp 00:33:29.539 rmmod nvme_fabrics 00:33:29.539 rmmod nvme_keyring 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 87579 ']' 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 87579 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 87579 ']' 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 87579 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87579 00:33:29.798 killing process with pid 87579 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87579' 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 87579 00:33:29.798 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 87579 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:30.057 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@548 -- # iptables-restore 00:33:30.318 00:33:30.318 real 0m2.616s 00:33:30.318 user 0m5.631s 00:33:30.318 sys 0m1.019s 00:33:30.318 ************************************ 00:33:30.318 END TEST nvmf_identify 00:33:30.318 ************************************ 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.318 ************************************ 00:33:30.318 START TEST nvmf_perf 00:33:30.318 ************************************ 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:33:30.318 * Looking for test storage... 00:33:30.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:33:30.318 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:30.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.578 --rc genhtml_branch_coverage=1 00:33:30.578 --rc genhtml_function_coverage=1 00:33:30.578 --rc genhtml_legend=1 00:33:30.578 --rc geninfo_all_blocks=1 00:33:30.578 --rc geninfo_unexecuted_blocks=1 00:33:30.578 00:33:30.578 ' 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:30.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.578 --rc genhtml_branch_coverage=1 00:33:30.578 --rc genhtml_function_coverage=1 00:33:30.578 --rc genhtml_legend=1 00:33:30.578 --rc geninfo_all_blocks=1 00:33:30.578 --rc geninfo_unexecuted_blocks=1 00:33:30.578 00:33:30.578 ' 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:30.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.578 --rc genhtml_branch_coverage=1 00:33:30.578 --rc genhtml_function_coverage=1 00:33:30.578 --rc genhtml_legend=1 00:33:30.578 --rc geninfo_all_blocks=1 00:33:30.578 --rc geninfo_unexecuted_blocks=1 00:33:30.578 00:33:30.578 ' 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:30.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.578 --rc genhtml_branch_coverage=1 00:33:30.578 --rc genhtml_function_coverage=1 00:33:30.578 --rc genhtml_legend=1 00:33:30.578 --rc geninfo_all_blocks=1 00:33:30.578 --rc geninfo_unexecuted_blocks=1 00:33:30.578 00:33:30.578 ' 00:33:30.578 11:16:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:30.578 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:33:30.578 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.578 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
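The scripts/common.sh walk above is the semantic version comparison the lcov version is pushed through (`lt 1.15 2` becomes `cmp_versions 1.15 '<' 2`). Condensed from the trace: split both versions on `.-:`, pad the shorter list, and compare numerically component by component. A compact equivalent, with the per-component digit validation via `decimal` elided:

```bash
# Compact equivalent of the cmp_versions walk traced above (scripts/common.sh).
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"      # @336: "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$3"      # @337: "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}        # missing components count as 0
        ((a > b)) && { [[ $op == '>' ]]; return; }   # @367
        ((a < b)) && { [[ $op == '<' ]]; return; }   # @368
    done
    [[ $op == *=* ]]                      # all equal: only <=, >=, == succeed
}
lt() { cmp_versions "$1" '<' "$2"; }      # lt 1.15 2 -> true, so the 1.x LCOV_OPTS apply
```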
NVMF_SECOND_PORT=4421 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:30.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf 
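Note the genuine script error captured above: common.sh line 31 evaluates `'[' '' -eq 1 ']'` against an empty variable, and `[` aborts with "integer expression expected" (the run continues because the non-zero status merely skips the branch). A hypothetical repro and the defensive pattern that avoids it; FLAG stands in for whichever variable is empty at common.sh:31.

```bash
# Hypothetical repro of the "[: : integer expression expected" error above.
FLAG=""
[ "$FLAG" -eq 1 ] && echo hit        # errors on stderr: '' is not an integer

[ "${FLAG:-0}" -eq 1 ] && echo hit   # defaulting empty/unset to 0 keeps [ happy
```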
-- nvmf/common.sh@296 -- # prepare_net_devs 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@223 -- # create_target_ns 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:30.579 11:16:55 
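The nvmftestinit run above builds the virtual topology from scratch: a dedicated network namespace for the target plus a host-side bridge that will carry all the veth peers. Boiled down from the trace:

```bash
# Boiled-down equivalent of create_target_ns/create_main_bridge as traced
# above (test/nvmf/setup.sh).
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
NVMF_BRIDGE=nvmf_br

ip netns add "$NVMF_TARGET_NAMESPACE"                          # @136
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")    # @137
"${NVMF_TARGET_NS_CMD[@]}" ip link set lo up                   # @207: loopback inside the ns

# @105-@108: recreate the bridge idempotently (nothing to delete in this run)
[[ -e /sys/class/net/$NVMF_BRIDGE/address ]] && ip link delete "$NVMF_BRIDGE"
ip link add "$NVMF_BRIDGE" type bridge
ip link set "$NVMF_BRIDGE" up
# @110: allow forwarding between ports on the bridge, tagged for later cleanup
iptables -A FORWARD -i "$NVMF_BRIDGE" -o "$NVMF_BRIDGE" -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT"
```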
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:30.579 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth 
target0 target0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up target0 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:30.580 10.0.0.1 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:30.580 
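Addresses come out of a single integer pool (ip_pool = 0x0a000001, advanced by two per pair), and val_to_ip turns the counter into dotted-quad form. The trace only shows the final printf, so the bit-shift body below is inferred from its output (167772161 == 0x0A000001); the ifalias write is what lets set_ip's address be read back later with cat.

```bash
# Sketch of val_to_ip/set_ip matching the trace above; val_to_ip's body is
# inferred from the printf it produces.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}

set_ip() {
    local dev=$1 ip
    ip=$(val_to_ip "$2")                             # @197
    ip addr add "$ip/24" dev "$dev"                  # @198
    echo "$ip" | tee "/sys/class/net/$dev/ifalias"   # @200: readable later via cat
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```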
11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:30.580 10.0.0.2 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target0_br master 
nvmf_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:30.580 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- 
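Every iptables rule the setup adds is tagged with an SPDK_NVMF comment (the ipts wrapper at common.sh@547); that is what makes the iptr cleanup traced earlier possible, since teardown just filters tagged rules out of a full save/restore cycle. The pairing, condensed:

```bash
# The ipts/iptr pairing condensed from the traces above (test/nvmf/common.sh).
ipts() {
    # tag the rule so it can be found again at cleanup time (@547)
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # drop every tagged rule in one shot: save, filter, restore (@548)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # as in the trace
```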
# eval ' ip link set initiator1_br up' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up target1 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772163 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:30.841 10.0.0.3 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 
00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772164 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:30.841 10.0.0.4 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.841 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 
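The helpers above (set_up, set_ip, get_ip_address, ping_ip) all take an optional in_ns argument that names an array variable rather than passing a command prefix directly; a bash nameref then lets one function body run either on the host or inside the namespace. The pattern, isolated from the set_up traces:

```bash
# The in_ns nameref pattern used by set_up and friends in the traces above.
set_up() {
    local dev=$1 in_ns=${2:-}
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns                   # @205: bind the named array
    fi
    eval "${ns[*]-} ip link set $dev up"     # @207: prefix is empty on the host
}

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
set_up target1 NVMF_TARGET_NS_CMD   # -> ip netns exec nvmf_ns_spdk ip link set target1 up
set_up initiator1_br                # ->  ip link set initiator1_br up
```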
00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:30.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:33:30.842 00:33:30.842 --- 10.0.0.1 ping statistics --- 00:33:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.842 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:30.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:30.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:33:30.842 00:33:30.842 --- 10.0.0.2 ping statistics --- 00:33:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.842 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:30.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:30.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:33:30.842 00:33:30.842 --- 10.0.0.3 ping statistics --- 00:33:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.842 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:30.842 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:30.843 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target1 00:33:30.843 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:33:30.843 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:30.843 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:31.110 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
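The four pings above validate each interface pair in both directions: the initiator address is pinged from inside the namespace, the target address from the host. Condensed from the ping_ips/ping_ip traces, with the ifalias files written during set_ip serving as the address store:

```bash
# Condensed ping_ips from the trace above (test/nvmf/setup.sh).
ping_ips() {
    local pairs=$1 pair ip
    for ((pair = 0; pair < pairs; pair++)); do                       # @89
        ip=$(cat "/sys/class/net/initiator$pair/ifalias")            # @163
        ip netns exec nvmf_ns_spdk ping -c 1 "$ip"                   # @83: ns -> host
        ip=$(ip netns exec nvmf_ns_spdk cat "/sys/class/net/target$pair/ifalias")
        ping -c 1 "$ip"                                              # @83: host -> ns
    done
}
ping_ips 2   # 10.0.0.1, 10.0.0.3 from the ns; 10.0.0.2, 10.0.0.4 from the host
```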
00:33:31.110 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.126 ms 00:33:31.110 00:33:31.110 --- 10.0.0.4 ping statistics --- 00:33:31.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.110 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # return 0 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:31.110 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # 
echo initiator1 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:33:31.111 11:16:55 
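The nvmf_legacy_env block traced around here resolves each device's ifalias and exports the flat NVMF_* variables older tests still consume. Its net effect, with the values resolved in this run:

```bash
# Net effect of nvmf_legacy_env as traced above, values from this run.
NVMF_TARGET_INTERFACE=target0        # @321
NVMF_TARGET_INTERFACE2=target1       # @322
NVMF_FIRST_INITIATOR_IP=10.0.0.1     # @324: cat /sys/class/net/initiator0/ifalias
NVMF_SECOND_INITIATOR_IP=10.0.0.3    # @325
NVMF_FIRST_TARGET_IP=10.0.0.2        # @331: read via ip netns exec nvmf_ns_spdk
NVMF_SECOND_TARGET_IP=10.0.0.4       # @332
```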
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target1 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:31.111 ' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=87857 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 87857 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 87857 ']' 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.111 11:16:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:31.111 [2024-12-05 11:16:55.679220] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
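The nvmfappstart step above launches nvmf_tgt inside the nvmf_ns_spdk namespace and then blocks in waitforlisten until the target's JSON-RPC socket at /var/tmp/spdk.sock accepts commands (hence the "Waiting for process to start up and listen on UNIX domain socket..." message). A minimal stand-in for that wait loop, assuming rpc.py from the SPDK checkout is on PATH; the in-tree helper in autotest_common.sh does the same polling with extra bookkeeping:

  # Poll the JSON-RPC socket until the app answers, or give up after
  # 100 retries (the max_retries value used above). rpc_get_methods is
  # a cheap RPC that any live SPDK application responds to.
  wait_for_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
          rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }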
00:33:31.111 [2024-12-05 11:16:55.679539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.369 [2024-12-05 11:16:55.842096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:31.369 [2024-12-05 11:16:55.930419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.369 [2024-12-05 11:16:55.930489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.369 [2024-12-05 11:16:55.930505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.369 [2024-12-05 11:16:55.930519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.369 [2024-12-05 11:16:55.930530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.369 [2024-12-05 11:16:55.932087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.369 [2024-12-05 11:16:55.932277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.369 [2024-12-05 11:16:55.932380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:31.369 [2024-12-05 11:16:55.932484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:32.302 11:16:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:33:32.867 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:33:32.867 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:33:32.867 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:33:32.867 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:33.433 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:33:33.433 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:33:33.433 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:33:33.433 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:33:33.433 11:16:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:33:33.748 [2024-12-05 11:16:58.142064] tcp.c: 756:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:33:33.748 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:34.005 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:34.005 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:34.262 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:33:34.262 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:34.520 11:16:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.779 [2024-12-05 11:16:59.244128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.779 11:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:35.036 11:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:33:35.036 11:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:33:35.036 11:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:33:35.036 11:16:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:33:35.971 Initializing NVMe Controllers 00:33:35.971 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:33:35.971 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:33:35.971 Initialization complete. Launching workers. 00:33:35.971 ======================================================== 00:33:35.971 Latency(us) 00:33:35.971 Device Information : IOPS MiB/s Average min max 00:33:35.971 PCIE (0000:00:10.0) NSID 1 from core 0: 26848.00 104.88 1191.70 340.37 5403.46 00:33:35.971 ======================================================== 00:33:35.971 Total : 26848.00 104.88 1191.70 340.37 5403.46 00:33:35.971 00:33:35.971 11:17:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:37.348 Initializing NVMe Controllers 00:33:37.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:37.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:37.348 Initialization complete. Launching workers. 
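The queue-depth-1 run above is the first one that goes over the fabric instead of straight at the PCIe device; its per-namespace latency table follows below. The -r transport ID string is what selects the target: trtype names the transport, adrfam the address family, traddr the listener address, and trsvcid the TCP port, i.e. the 10.0.0.2:4420 listener added a few steps earlier. The same invocation, broken out flag by flag:

  #   -q 1       one outstanding IO per queue pair
  #   -o 4096    4 KiB IO size
  #   -w randrw  random mixed workload; -M 50 = 50% reads
  #   -t 1       one-second run
  #   -r ...     NVMe-oF transport ID of the listener created above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 \
      -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'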
00:33:37.348 ======================================================== 00:33:37.348 Latency(us) 00:33:37.348 Device Information : IOPS MiB/s Average min max 00:33:37.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4239.82 16.56 235.64 83.32 7190.27 00:33:37.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.73 5032.70 12026.80 00:33:37.348 ======================================================== 00:33:37.348 Total : 4364.31 17.05 459.86 83.32 12026.80 00:33:37.348 00:33:37.348 11:17:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.756 Initializing NVMe Controllers 00:33:38.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:38.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:38.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:38.756 Initialization complete. Launching workers. 00:33:38.756 ======================================================== 00:33:38.756 Latency(us) 00:33:38.756 Device Information : IOPS MiB/s Average min max 00:33:38.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10134.43 39.59 3157.74 602.04 8740.93 00:33:38.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2674.50 10.45 12075.54 7271.43 20057.08 00:33:38.756 ======================================================== 00:33:38.756 Total : 12808.93 50.03 5019.78 602.04 20057.08 00:33:38.756 00:33:38.756 11:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:33:38.756 11:17:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:41.286 Initializing NVMe Controllers 00:33:41.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:41.286 Controller IO queue size 128, less than required. 00:33:41.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:41.286 Controller IO queue size 128, less than required. 00:33:41.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:41.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:41.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:41.286 Initialization complete. Launching workers. 
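The queue-size notices above are expected for this 256 KiB run: the target advertises 128-entry IO queues while the initiator asks for 128 outstanding IOs per namespace, so whatever cannot be posted immediately waits in the NVMe driver, exactly as the notice says. The data in flight at full depth is easy to bound from the flags alone:

  awk 'BEGIN { print 128 * 262144 / 2^20 " MiB in flight per namespace at full queue depth" }'
  # -> 32 MiB in flight per namespace at full queue depth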
00:33:41.286 ======================================================== 00:33:41.286 Latency(us) 00:33:41.286 Device Information : IOPS MiB/s Average min max 00:33:41.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2032.97 508.24 64091.76 44345.39 112251.03 00:33:41.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.34 150.34 220959.16 104425.61 344917.47 00:33:41.286 ======================================================== 00:33:41.286 Total : 2634.32 658.58 99900.39 44345.39 344917.47 00:33:41.286 00:33:41.286 11:17:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:33:41.545 Initializing NVMe Controllers 00:33:41.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:41.545 Controller IO queue size 128, less than required. 00:33:41.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:41.545 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:33:41.545 Controller IO queue size 128, less than required. 00:33:41.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:41.545 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:33:41.545 WARNING: Some requested NVMe devices were skipped 00:33:41.545 No valid NVMe controllers or AIO or URING devices found 00:33:41.545 11:17:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:33:44.077 Initializing NVMe Controllers 00:33:44.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:44.077 Controller IO queue size 128, less than required. 00:33:44.077 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:44.077 Controller IO queue size 128, less than required. 00:33:44.077 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:44.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:44.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:44.077 Initialization complete. Launching workers. 
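The --transport-stat run starting above prints per-connection TCP counters (one block per namespace's qpair) ahead of the usual latency table. Read roughly: polls and idle_polls count poll-group iterations (idle ones found nothing to do), sock_completions and nvme_completions count socket events and finished NVMe commands, and submitted_requests versus queued_requests separates commands that went straight out from ones that had to wait for resources. For the NSID 1 qpair in the block below, for instance:

  awk 'BEGIN { printf "idle poll iterations: %.0f%%\n", 100 * 5097 / 8482 }'
  # -> idle poll iterations: 60%  (5097 of 8482 polls found no TCP work)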
00:33:44.077 00:33:44.077 ==================== 00:33:44.077 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:33:44.077 TCP transport: 00:33:44.077 polls: 8482 00:33:44.077 idle_polls: 5097 00:33:44.077 sock_completions: 3385 00:33:44.077 nvme_completions: 5943 00:33:44.077 submitted_requests: 8916 00:33:44.077 queued_requests: 1 00:33:44.077 00:33:44.077 ==================== 00:33:44.077 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:33:44.077 TCP transport: 00:33:44.077 polls: 10972 00:33:44.077 idle_polls: 7426 00:33:44.077 sock_completions: 3546 00:33:44.077 nvme_completions: 6289 00:33:44.077 submitted_requests: 9396 00:33:44.077 queued_requests: 1 00:33:44.077 ======================================================== 00:33:44.077 Latency(us) 00:33:44.077 Device Information : IOPS MiB/s Average min max 00:33:44.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1485.50 371.37 88950.27 55100.06 142608.84 00:33:44.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1572.00 393.00 82928.25 37263.09 140815.45 00:33:44.077 ======================================================== 00:33:44.077 Total : 3057.50 764.37 85854.07 37263.09 142608.84 00:33:44.077 00:33:44.077 11:17:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:33:44.390 11:17:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:44.664 rmmod nvme_tcp 00:33:44.664 rmmod nvme_fabrics 00:33:44.664 rmmod nvme_keyring 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 87857 ']' 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 87857 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 87857 ']' 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 87857 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87857 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87857' 00:33:44.664 killing process with pid 87857 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 87857 00:33:44.664 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 87857 00:33:45.231 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:45.231 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:33:45.231 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:33:45.231 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:45.231 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:45.231 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:45.231 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:45.490 11:17:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:45.490 11:17:10 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:33:45.490 ************************************ 00:33:45.490 END TEST nvmf_perf 00:33:45.490 ************************************ 00:33:45.490 00:33:45.490 real 0m15.245s 00:33:45.490 user 0m54.248s 00:33:45.490 sys 0m4.484s 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:45.490 ************************************ 00:33:45.490 START TEST nvmf_fio_host 00:33:45.490 ************************************ 00:33:45.490 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:45.749 * Looking for test storage... 
00:33:45.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.750 --rc genhtml_branch_coverage=1 00:33:45.750 --rc genhtml_function_coverage=1 00:33:45.750 --rc genhtml_legend=1 00:33:45.750 --rc geninfo_all_blocks=1 00:33:45.750 --rc geninfo_unexecuted_blocks=1 00:33:45.750 00:33:45.750 ' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.750 --rc genhtml_branch_coverage=1 00:33:45.750 --rc genhtml_function_coverage=1 00:33:45.750 --rc genhtml_legend=1 00:33:45.750 --rc geninfo_all_blocks=1 00:33:45.750 --rc geninfo_unexecuted_blocks=1 00:33:45.750 00:33:45.750 ' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.750 --rc genhtml_branch_coverage=1 00:33:45.750 --rc genhtml_function_coverage=1 00:33:45.750 --rc genhtml_legend=1 00:33:45.750 --rc geninfo_all_blocks=1 00:33:45.750 --rc geninfo_unexecuted_blocks=1 00:33:45.750 00:33:45.750 ' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:45.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.750 --rc genhtml_branch_coverage=1 00:33:45.750 --rc genhtml_function_coverage=1 00:33:45.750 --rc genhtml_legend=1 00:33:45.750 --rc geninfo_all_blocks=1 00:33:45.750 --rc geninfo_unexecuted_blocks=1 00:33:45.750 00:33:45.750 ' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.750 11:17:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.750 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:45.751 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:45.751 11:17:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@223 -- # create_target_ns 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set 
nvmf_br up' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:45.751 11:17:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target0 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:45.751 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:46.010 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:46.011 10.0.0.1 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:46.011 10.0.0.2 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 
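Every 10.0.0.x address in this section comes out of one integer pool: setup_interfaces starts at 0x0a000001 (167772161) and hands consecutive values to set_ip, which expands them to dotted-quad form and records the result both on the interface (ip addr add) and in /sys/class/net/<dev>/ifalias, which is exactly where the get_ip_address calls at the top of this log read them back from. A sketch of the octet split the val_to_ip helper performs ahead of the printf '%u.%u.%u.%u' seen above:

  val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(( (val >> 24) & 0xff )) \
          $(( (val >> 16) & 0xff )) \
          $(( (val >> 8) & 0xff )) \
          $(( val & 0xff ))
  }
  val_to_ip 167772161   # -> 10.0.0.1 (initiator0)
  val_to_ip 167772162   # -> 10.0.0.2 (target0, inside the netns)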
00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:46.011 11:17:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target1 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772163 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 
-- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:46.011 10.0.0.3 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772164 00:33:46.011 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:46.012 10.0.0.4 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:46.012 11:17:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:46.012 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:46.272 
11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:46.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:33:46.272 00:33:46.272 --- 10.0.0.1 ping statistics --- 00:33:46.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.272 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:46.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:46.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:33:46.272 00:33:46.272 --- 10.0.0.2 ping statistics --- 00:33:46.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.272 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:46.272 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:46.272 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:33:46.272 00:33:46.272 --- 10.0.0.3 ping statistics --- 00:33:46.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.272 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:33:46.272 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:46.273 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:46.273 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:33:46.273 00:33:46.273 --- 10.0.0.4 ping statistics --- 00:33:46.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.273 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # return 0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:46.273 11:17:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:46.273 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:46.273 ' 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88386 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88386 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 88386 ']' 00:33:46.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
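Everything past this point runs against the namespaced target: host/fio.sh@23 starts nvmf_tgt inside nvmf_ns_spdk and waitforlisten blocks until the RPC socket answers. Condensed, the launch sequence is the following; the command line and paths are exactly as logged, while the polling loop is a simplified stand-in for waitforlisten, not its actual body:

# -i 0: shared-memory instance id, -e 0xFFFF: enable all tracepoint groups,
# -m 0xF: core mask for the four reactors reported below.
ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!    # 88386 in this run
# Simplified wait: poll the default UNIX-domain RPC socket until the app
# responds (the real waitforlisten also checks that the pid stays alive).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do
    sleep 0.1
done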
00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.274 11:17:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.532 [2024-12-05 11:17:10.935964] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:46.532 [2024-12-05 11:17:10.936053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.532 [2024-12-05 11:17:11.085059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:46.532 [2024-12-05 11:17:11.145807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.532 [2024-12-05 11:17:11.145871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.532 [2024-12-05 11:17:11.145886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.532 [2024-12-05 11:17:11.145900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.532 [2024-12-05 11:17:11.145911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
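Once the reactors are running, the target is provisioned entirely over JSON-RPC. Stripped of the xtrace noise, the calls traced below reduce to five, with arguments exactly as logged (a second add_listener on the discovery subsystem follows the last one):

rpc.py nvmf_create_transport -t tcp -o -u 8192    # opts from NVMF_TRANSPORT_OPTS='-t tcp -o' above
rpc.py bdev_malloc_create 64 512 -b Malloc1       # 64 MiB ramdisk with 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

fio then drives that listener through the spdk_nvme ioengine, passing the connection as the fio filename: 'trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'.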
00:33:46.532 [2024-12-05 11:17:11.147051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.533 [2024-12-05 11:17:11.147148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.533 [2024-12-05 11:17:11.147249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.533 [2024-12-05 11:17:11.147251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:46.790 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.791 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:46.791 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.048 [2024-12-05 11:17:11.495158] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.048 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:47.048 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:47.048 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:47.048 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:47.305 Malloc1 00:33:47.305 11:17:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:47.563 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:47.822 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.079 [2024-12-05 11:17:12.554617] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.079 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:48.338 11:17:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:48.596 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:48.596 fio-3.35 00:33:48.596 Starting 1 thread 00:33:51.141 00:33:51.141 test: (groupid=0, jobs=1): err= 0: pid=88502: Thu Dec 5 11:17:15 2024 00:33:51.141 read: IOPS=10.8k, BW=42.3MiB/s (44.4MB/s)(84.9MiB/2006msec) 00:33:51.141 slat (nsec): min=1570, max=372762, avg=1978.88, stdev=3236.66 00:33:51.141 clat (usec): min=2945, max=15041, avg=6189.66, stdev=608.63 00:33:51.141 lat (usec): min=2946, max=15043, avg=6191.64, stdev=608.69 00:33:51.141 clat percentiles (usec): 00:33:51.141 | 1.00th=[ 5014], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5800], 00:33:51.141 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:33:51.141 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6783], 95.00th=[ 7046], 00:33:51.141 | 99.00th=[ 7767], 99.50th=[ 9110], 99.90th=[12387], 99.95th=[14091], 00:33:51.141 | 99.99th=[15008] 00:33:51.141 bw ( KiB/s): min=42616, max=43960, per=100.00%, avg=43348.00, stdev=554.01, samples=4 00:33:51.141 iops : min=10654, max=10990, avg=10837.00, stdev=138.50, samples=4 00:33:51.141 write: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(84.7MiB/2006msec); 0 zone resets 00:33:51.141 slat (nsec): min=1626, max=321732, avg=2057.31, stdev=2459.18 00:33:51.141 clat (usec): min=2735, max=10211, avg=5580.44, stdev=497.96 00:33:51.141 lat (usec): min=2737, max=10213, avg=5582.50, stdev=497.92 00:33:51.141 clat percentiles (usec): 00:33:51.141 | 1.00th=[ 4359], 5.00th=[ 4883], 10.00th=[ 5080], 20.00th=[ 5276], 
00:33:51.141 | 30.00th=[ 5342], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5669], 00:33:51.141 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 6128], 95.00th=[ 6325], 00:33:51.141 | 99.00th=[ 6849], 99.50th=[ 8029], 99.90th=[ 9372], 99.95th=[ 9503], 00:33:51.141 | 99.99th=[ 9896] 00:33:51.141 bw ( KiB/s): min=42936, max=43712, per=100.00%, avg=43258.00, stdev=344.24, samples=4 00:33:51.141 iops : min=10734, max=10928, avg=10814.50, stdev=86.06, samples=4 00:33:51.141 lat (msec) : 4=0.33%, 10=99.49%, 20=0.17% 00:33:51.141 cpu : usr=65.84%, sys=26.73%, ctx=8, majf=0, minf=7 00:33:51.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:51.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.141 issued rwts: total=21736,21694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.141 00:33:51.141 Run status group 0 (all jobs): 00:33:51.141 READ: bw=42.3MiB/s (44.4MB/s), 42.3MiB/s-42.3MiB/s (44.4MB/s-44.4MB/s), io=84.9MiB (89.0MB), run=2006-2006msec 00:33:51.141 WRITE: bw=42.2MiB/s (44.3MB/s), 42.2MiB/s-42.2MiB/s (44.3MB/s-44.3MB/s), io=84.7MiB (88.9MB), run=2006-2006msec 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:33:51.141 11:17:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:51.141 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:51.141 fio-3.35 00:33:51.141 Starting 1 thread 00:33:53.672 00:33:53.672 test: (groupid=0, jobs=1): err= 0: pid=88549: Thu Dec 5 11:17:17 2024 00:33:53.672 read: IOPS=8499, BW=133MiB/s (139MB/s)(267MiB/2009msec) 00:33:53.672 slat (nsec): min=2540, max=98489, avg=3124.85, stdev=1595.15 00:33:53.672 clat (usec): min=2168, max=18112, avg=8691.91, stdev=2248.87 00:33:53.672 lat (usec): min=2170, max=18117, avg=8695.04, stdev=2249.08 00:33:53.672 clat percentiles (usec): 00:33:53.672 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6718], 00:33:53.672 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 8979], 00:33:53.672 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[11600], 95.00th=[12387], 00:33:53.672 | 99.00th=[14615], 99.50th=[15926], 99.90th=[17433], 99.95th=[17695], 00:33:53.672 | 99.99th=[17957] 00:33:53.672 bw ( KiB/s): min=67968, max=74464, per=51.62%, avg=70200.00, stdev=2914.09, samples=4 00:33:53.672 iops : min= 4248, max= 4654, avg=4387.50, stdev=182.13, samples=4 00:33:53.672 write: IOPS=4976, BW=77.8MiB/s (81.5MB/s)(143MiB/1842msec); 0 zone resets 00:33:53.672 slat (usec): min=29, max=397, avg=35.13, stdev=10.04 00:33:53.672 clat (usec): min=5607, max=20652, avg=11053.86, stdev=2283.88 00:33:53.672 lat (usec): min=5637, max=20682, avg=11088.99, stdev=2286.59 00:33:53.672 clat percentiles (usec): 00:33:53.672 | 1.00th=[ 6915], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[ 8979], 00:33:53.672 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469], 00:33:53.672 | 70.00th=[12256], 80.00th=[13042], 90.00th=[14091], 95.00th=[15008], 00:33:53.672 | 99.00th=[16450], 99.50th=[17433], 99.90th=[20055], 99.95th=[20317], 00:33:53.672 | 99.99th=[20579] 00:33:53.672 bw ( KiB/s): min=70912, max=75776, per=91.46%, avg=72816.00, stdev=2302.30, samples=4 00:33:53.672 iops : min= 4432, max= 4736, avg=4551.00, stdev=143.89, samples=4 00:33:53.672 lat (msec) : 4=0.29%, 10=59.87%, 20=39.80%, 50=0.04% 00:33:53.672 cpu : usr=71.81%, sys=19.92%, ctx=4, majf=0, minf=6 00:33:53.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:33:53.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:53.672 issued rwts: total=17076,9166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:53.672 00:33:53.672 Run status group 0 (all jobs): 00:33:53.672 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=267MiB (280MB), run=2009-2009msec 
00:33:53.672 WRITE: bw=77.8MiB/s (81.5MB/s), 77.8MiB/s-77.8MiB/s (81.5MB/s-81.5MB/s), io=143MiB (150MB), run=1842-1842msec 00:33:53.672 11:17:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:53.672 rmmod nvme_tcp 00:33:53.672 rmmod nvme_fabrics 00:33:53.672 rmmod nvme_keyring 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 88386 ']' 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 88386 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 88386 ']' 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 88386 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88386 00:33:53.672 killing process with pid 88386 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:53.672 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88386' 00:33:53.673 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 88386 00:33:53.673 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 88386 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@254 -- # local dev 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:53.932 11:17:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:33:53.932 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@41 -- # dev_map=() 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:33:54.195 00:33:54.195 real 0m8.609s 00:33:54.195 user 0m33.525s 00:33:54.195 sys 0m2.705s 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.195 ************************************ 00:33:54.195 END TEST nvmf_fio_host 00:33:54.195 ************************************ 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:54.195 ************************************ 00:33:54.195 START TEST nvmf_failover 00:33:54.195 ************************************ 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:54.195 * Looking for test storage... 00:33:54.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:54.195 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:54.456 11:17:18 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:54.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.456 --rc genhtml_branch_coverage=1 00:33:54.456 --rc genhtml_function_coverage=1 00:33:54.456 --rc genhtml_legend=1 00:33:54.456 --rc geninfo_all_blocks=1 00:33:54.456 --rc geninfo_unexecuted_blocks=1 00:33:54.456 00:33:54.456 ' 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:54.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.456 --rc genhtml_branch_coverage=1 00:33:54.456 --rc genhtml_function_coverage=1 00:33:54.456 --rc genhtml_legend=1 00:33:54.456 --rc geninfo_all_blocks=1 00:33:54.456 --rc geninfo_unexecuted_blocks=1 00:33:54.456 00:33:54.456 ' 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:54.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.456 --rc genhtml_branch_coverage=1 00:33:54.456 --rc genhtml_function_coverage=1 00:33:54.456 --rc genhtml_legend=1 00:33:54.456 --rc geninfo_all_blocks=1 00:33:54.456 --rc geninfo_unexecuted_blocks=1 00:33:54.456 00:33:54.456 ' 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:54.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:54.456 --rc genhtml_branch_coverage=1 00:33:54.456 --rc genhtml_function_coverage=1 00:33:54.456 --rc genhtml_legend=1 00:33:54.456 --rc geninfo_all_blocks=1 00:33:54.456 --rc geninfo_unexecuted_blocks=1 00:33:54.456 00:33:54.456 ' 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
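The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x before exporting the extra branch/function coverage flags: both version strings are split on '.', '-' and ':' and compared field by field as decimals. A minimal standalone sketch of the same comparison, assuming plain dotted versions; the name ver_lt is illustrative, not the repo's:

ver_lt() {                         # 0 (true) when $1 sorts strictly before $2
  local IFS=.-:                    # split on the same separators the trace uses
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # missing fields act as 0
    (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
  done
  return 1                         # equal versions are not "less than"
}
ver_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'        # prints the message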
00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:54.456 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:54.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:54.457 11:17:18 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@280 -- # nvmf_veth_init 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@223 -- # create_target_ns 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # create_main_bridge 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@105 -- # delete_main_bridge 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # 
local dev=nvmf_br in_ns= 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator0 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:33:54.457 11:17:18 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:54.457 11:17:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target0 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0 up 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target0_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target0 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 
00:33:54.457 10.0.0.1 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:33:54.457 10.0.0.2 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator0 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 
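At this point the first initiator/target pair is wired: the 32-bit pool value 167772161 (0x0A000001) has been rendered as 10.0.0.1 for initiator0 on the host side, 167772162 as 10.0.0.2 for target0 inside the nvmf_ns_spdk namespace, and each address is mirrored into the device's ifalias so later lookups can read it back with cat. A sketch of the conversion and wiring, reproducing this run's names; the trace only shows val_to_ip's final printf, so the shift-based body below is an assumption about its implementation:

# Sketch of val_to_ip from nvmf/setup.sh: print a pool value as a dotted quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) $((  val         & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
# The pair-0 wiring traced above, by hand (root required; names match this run):
#   ip netns add nvmf_ns_spdk
#   ip link add initiator0 type veth peer name initiator0_br
#   ip link add target0    type veth peer name target0_br
#   ip link set target0 netns nvmf_ns_spdk
#   ip addr add 10.0.0.1/24 dev initiator0
#   ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
#   echo 10.0.0.1 > /sys/class/net/initiator0/ifalias   # cached for get_ip_address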
00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:33:54.457 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:33:54.716 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target0_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:33:54.717 11:17:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator1 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target1 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1 up 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target1_br 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target1 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772163 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:33:54.717 10.0.0.3 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772164 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:33:54.717 10.0.0.4 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator1 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:33:54.717 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:33:54.718 11:17:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target1_br 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 2 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:54.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:33:54.718 00:33:54.718 --- 10.0.0.1 ping statistics --- 00:33:54.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.718 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:54.718 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:54.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:54.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:33:54.719 00:33:54.719 --- 10.0.0.2 ping statistics --- 00:33:54.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.719 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:33:54.719 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:33:54.719 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:33:54.719 00:33:54.719 --- 10.0.0.3 ping statistics --- 00:33:54.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.719 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:33:54.719 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:33:54.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:33:54.978 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.085 ms 00:33:54.978 00:33:54.978 --- 10.0.0.4 ping statistics --- 00:33:54.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.978 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # return 0 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:54.978 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:54.979 11:17:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:54.979 ' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=88820 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 88820 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88820 ']' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:54.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
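nvmfappstart has now resolved the per-pair addresses into the legacy env names (NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4), loaded nvme-tcp, and launched nvmf_tgt as pid 88820 inside the target namespace; waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A sketch of that launch-and-poll pattern, using this run's binary path and flags; wait_for_rpc_sock is an illustrative helper, not the tree's waitforlisten:

# Start the target inside the test namespace, then poll its RPC socket.
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

wait_for_rpc_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- )); do
    # rpc.py exits non-zero until the app is listening on the socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
        &> /dev/null && return 0
    sleep 0.1
  done
  return 1
}
wait_for_rpc_sock /var/tmp/spdk.sock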
00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:54.979 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:54.979 [2024-12-05 11:17:19.524205] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:54.979 [2024-12-05 11:17:19.524276] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.237 [2024-12-05 11:17:19.676215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:55.238 [2024-12-05 11:17:19.740391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.238 [2024-12-05 11:17:19.740455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:55.238 [2024-12-05 11:17:19.740472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.238 [2024-12-05 11:17:19.740485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.238 [2024-12-05 11:17:19.740496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.238 [2024-12-05 11:17:19.741493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.238 [2024-12-05 11:17:19.741625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.238 [2024-12-05 11:17:19.742049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:55.238 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:55.238 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:55.238 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:55.238 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:55.238 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:55.496 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.496 11:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:55.754 [2024-12-05 11:17:20.173014] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.754 11:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:56.012 Malloc0 00:33:56.012 11:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:56.271 11:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:56.528 11:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.528 [2024-12-05 11:17:21.179299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.785 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:56.785 [2024-12-05 11:17:21.395581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:56.785 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:57.350 [2024-12-05 11:17:21.720047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88918 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88918 /var/tmp/bdevperf.sock 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88918 ']' 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:57.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
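Condensed, the target-side provisioning that failover.sh@22 through @28 just performed is the following RPC sequence (arguments copied from the trace; a sketch of the calls, not the test script itself; "rpc.py" stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py against the default /var/tmp/spdk.sock):

    # One TCP transport, one Malloc-backed namespace, three listeners on the
    # same subsystem -- the three ports are the failover paths exercised below.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done

bdevperf is then launched with -z, so it starts idle on its own RPC socket (/var/tmp/bdevperf.sock) and waits to be driven by perform_tests.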
00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:57.350 11:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:33:58.287 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:58.287 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:33:58.287 11:17:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:58.545 NVMe0n1
00:33:58.545 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:33:58.803
00:33:58.803 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88966
00:33:58.803 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:33:58.803 11:17:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:33:59.740 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:00.000 [2024-12-05 11:17:24.569828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aa830 is same with the state(6) to be set
[... the same recv-state error repeats verbatim for tqpair=0x20aa830 through 2024-12-05 11:17:24.570256; ~40 duplicate lines trimmed ...]
00:34:00.001 11:17:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:34:03.303 11:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:03.303
00:34:03.561 11:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:03.820 [2024-12-05 11:17:28.239273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab5e0 is same with the state(6) to be set
[... the same recv-state error repeats verbatim for tqpair=0x20ab5e0 through 2024-12-05 11:17:28.239971; ~37 duplicate lines trimmed ...]
00:34:03.821 11:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:34:07.171 11:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:07.171 [2024-12-05 11:17:31.489856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:07.171 11:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:34:08.105 11:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:08.105 [2024-12-05 11:17:32.737957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f5550 is same with the state(6) to be set
00:34:08.105 [2024-12-05 11:17:32.738036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f5550 is same with the state(6) to be set
00:34:08.364 11:17:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88966
00:34:14.929 {
00:34:14.929   "results": [
00:34:14.929     {
00:34:14.929       "job": "NVMe0n1",
00:34:14.929       "core_mask": "0x1",
00:34:14.929       "workload": "verify",
00:34:14.929       "status": "finished",
00:34:14.929       "verify_range": {
00:34:14.929         "start": 0,
00:34:14.929         "length": 16384
00:34:14.929       },
00:34:14.929       "queue_depth": 128,
00:34:14.929       "io_size": 4096,
00:34:14.929       "runtime": 15.007581,
00:34:14.929       "iops": 9637.529192745986,
00:34:14.929       "mibps": 37.64659840916401,
00:34:14.929       "io_failed": 3749,
00:34:14.929       "io_timeout": 0,
00:34:14.929       "avg_latency_us": 12918.521395199425,
00:34:14.929       "min_latency_us": 737.28,
00:34:14.929       "max_latency_us": 42941.68380952381
00:34:14.929     }
00:34:14.929   ],
00:34:14.929   "core_count": 1
00:34:14.929 }
00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88918
00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88918 ']'
00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88918 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88918 00:34:14.929 killing process with pid 88918 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88918' 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88918 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88918 00:34:14.929 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:14.929 [2024-12-05 11:17:21.781946] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:34:14.929 [2024-12-05 11:17:21.782037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88918 ] 00:34:14.929 [2024-12-05 11:17:21.933052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.929 [2024-12-05 11:17:21.998744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.929 Running I/O for 15 seconds... 
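Before the try.txt dump, it helps to line it up against the choreography that produced it (steps @35 through @59 above, arguments abbreviated; "..." elides the repeated -t/-a/-f/-n arguments shown in full in the trace):

    # Two paths attached up front with -x failover, a third added mid-run:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 ... -s 4420 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 ... -s 4421 -x failover
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # 15 s verify run (pid 88966)
    sleep 1
    rpc.py nvmf_subsystem_remove_listener ... -s 4420       # drop path 1 -> fail over to 4421
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 ... -s 4422 -x failover
    rpc.py nvmf_subsystem_remove_listener ... -s 4421       # drop path 2 -> fail over to 4422
    sleep 3
    rpc.py nvmf_subsystem_add_listener ... -s 4420          # restore path 1
    sleep 1
    rpc.py nvmf_subsystem_remove_listener ... -s 4422       # drop path 3 -> back to 4420
    wait 88966                                              # collect the JSON above

The results JSON is self-consistent: 9637.53 IOPS x 4096 B / 2^20 = 37.65 MiB/s, matching "mibps", and the run still finishes verify cleanly despite io_failed=3749. Those failures are what try.txt itemizes next: each I/O in flight on a path whose listener was just removed is completed with ABORTED - SQ DELETION (00/08) as the target tears down its queue pairs, then retried on a surviving path. Two greps summarize such a dump faster than reading it (plain shell, run against the saved try.txt):

    grep -c 'ABORTED - SQ DELETION' try.txt    # total aborted completions
    grep -oE 'lba:[0-9]+' try.txt | sort -t: -k2 -n | uniq | sed -n '1p;$p'    # LBA range covered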
00:34:14.929 9619.00 IOPS, 37.57 MiB/s [2024-12-05T11:17:39.581Z]
[2024-12-05 11:17:24.571230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:14.929 [2024-12-05 11:17:24.571286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... try.txt continues in the same pattern: one nvme_io_qpair_print_command NOTICE per in-flight command (READs lba 87360-87672, WRITEs lba 87808-88352, all len:8), each followed by an identical 'ABORTED - SQ DELETION (00/08)' completion; ~110 near-duplicate command/completion pairs trimmed ...]
00:34:14.932 [2024-12-05 11:17:24.575020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:14.932 [2024-12-05 11:17:24.575034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:14.932 [2024-12-05 11:17:24.575050] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.932 [2024-12-05 11:17:24.575314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.932 [2024-12-05 11:17:24.575344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:45 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.932 [2024-12-05 11:17:24.575531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.932 [2024-12-05 11:17:24.575545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b4620 is same with the state(6) to be set 00:34:14.932 [2024-12-05 11:17:24.575561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:14.933 [2024-12-05 11:17:24.575571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:14.933 [2024-12-05 11:17:24.575582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87800 len:8 PRP1 0x0 PRP2 0x0 00:34:14.933 [2024-12-05 11:17:24.575595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:24.575659] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:14.933 [2024-12-05 11:17:24.575718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.933 [2024-12-05 11:17:24.575734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:24.575748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.933 [2024-12-05 11:17:24.575761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:24.575774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.933 [2024-12-05 11:17:24.575787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:24.575800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.933 [2024-12-05 11:17:24.575813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:24.575826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:34:14.933 [2024-12-05 11:17:24.578846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:14.933 [2024-12-05 11:17:24.578889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247f30 (9): Bad file descriptor 00:34:14.933 [2024-12-05 11:17:24.607783] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:34:14.933 9772.00 IOPS, 38.17 MiB/s [2024-12-05T11:17:39.585Z] 10007.00 IOPS, 39.09 MiB/s [2024-12-05T11:17:39.585Z] 10139.50 IOPS, 39.61 MiB/s [2024-12-05T11:17:39.585Z] [2024-12-05 11:17:28.240320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240588] nvme_qpair.c: 
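The "(00/08)" printed with each aborted completion is the NVMe status pair sct/sc: status code type 0x00 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion. The initiator prints it for every I/O still queued on the TCP qpair when the listener at 10.0.0.2:4420 disappears; because the commands never executed, they are safe to resubmit once a path comes back. A minimal sketch of recognizing this status in an SPDK completion callback follows; the callback and the requeue_io() helper are hypothetical illustrations, not code from this test:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical helper: resubmit the I/O on the next healthy path. */
    static void requeue_io(void *ctx) { (void)ctx; /* placeholder */ }

    /* Hypothetical I/O completion callback: classify the
     * "ABORTED - SQ DELETION (00/08)" status (sct 0x00 / sc 0x08)
     * so the I/O is retried rather than reported as a hard failure. */
    static void
    io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* The submission queue was deleted under the command
             * (listener removed during failover); it never executed. */
            requeue_io(ctx);
            return;
        }
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                    cpl->status.sct, cpl->status.sc);
        }
    }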
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:14.933 [2024-12-05 11:17:28.240868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.240909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.240942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.240975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.240993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.933 [2024-12-05 11:17:28.241341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.933 [2024-12-05 11:17:28.241357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241644] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.241972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.241987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10488 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.934 [2024-12-05 11:17:28.242524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.934 [2024-12-05 11:17:28.242539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 
[2024-12-05 11:17:28.242650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.242966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.242982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.935 [2024-12-05 11:17:28.243807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.935 [2024-12-05 11:17:28.243821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.936 [2024-12-05 11:17:28.243836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.936 [2024-12-05 11:17:28.243851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.936 [2024-12-05 11:17:28.243867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.936 [2024-12-05 11:17:28.243881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.936 [2024-12-05 11:17:28.243896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:14.936 [2024-12-05 11:17:28.243911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.936 [2024-12-05 
00:34:14.936 [2024-12-05 11:17:28.243926] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE lba:10896-10912 and READ lba:10008-10136 (sqid:1, len:8) each completed: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:14.936 [2024-12-05 11:17:28.244627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:34:14.936 [2024-12-05 11:17:28.244640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: queued READ lba:10144 (cid:0, PRP1 0x0 PRP2 0x0) completed manually: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:14.936 [2024-12-05 11:17:28.244732] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:34:14.936 [2024-12-05 11:17:28.244787] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3-0 each aborted: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:14.936 [2024-12-05 11:17:28.244915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:34:14.936 [2024-12-05 11:17:28.244950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247f30 (9): Bad file descriptor 
00:34:14.936 [2024-12-05 11:17:28.247996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 
00:34:14.936 [2024-12-05 11:17:28.269669] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
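Every abort in the burst above carries the same completion status, printed as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), with dnr:0, meaning the host may retry. As a reference for reading these lines, here is a minimal, self-contained C sketch that unpacks the printed fields from the last two dwords of a raw 16-byte NVMe completion-queue entry; the bit layout comes from the NVMe specification, not from SPDK's internal types:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode dwords 2 and 3 of an NVMe completion entry into the fields
     * spdk_nvme_print_completion prints: (sct/sc) cid sqhd p m dnr. */
    static void print_cpl_status(uint32_t dw2, uint32_t dw3)
    {
        uint16_t sqhd = dw2 & 0xffff;             /* submission queue head pointer */
        uint16_t cid  = (uint16_t)(dw3 & 0xffff); /* command identifier */
        unsigned p    = (dw3 >> 16) & 0x1;        /* phase tag */
        unsigned sc   = (dw3 >> 17) & 0xff;       /* status code */
        unsigned sct  = (dw3 >> 25) & 0x7;        /* status code type */
        unsigned m    = (dw3 >> 30) & 0x1;        /* more status info available */
        unsigned dnr  = (dw3 >> 31) & 0x1;        /* do-not-retry */

        printf("(%02x/%02x) cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
               sct, sc, cid, sqhd, p, m, dnr);
    }

    int main(void)
    {
        /* sct=0x0, sc=0x08 reproduces the "(00/08) ... p:0 m:0 dnr:0"
         * printed for every aborted command in this log. */
        print_cpl_status(0x0000, 0x08u << 17);
        return 0;
    }

Because dnr is clear on all of these completions, the driver is permitted to resubmit the commands once a path is available again, which matches the failover-and-reset sequence logged above.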
00:34:14.936 10059.60 IOPS, 39.30 MiB/s [2024-12-05T11:17:39.588Z] 9888.67 IOPS, 38.63 MiB/s [2024-12-05T11:17:39.588Z] 9935.71 IOPS, 38.81 MiB/s [2024-12-05T11:17:39.588Z] 9908.12 IOPS, 38.70 MiB/s [2024-12-05T11:17:39.588Z] 9881.67 IOPS, 38.60 MiB/s [2024-12-05T11:17:39.588Z] 
00:34:14.937 [2024-12-05 11:17:32.739894] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE lba:100064-100520 and READ lba:99976-100000 (sqid:1, len:8) each completed: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
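Both the in-flight commands above (described with SGL entries, already handed to the TCP transport) and the still-queued requests summarized next (printed with PRP1 0x0 PRP2 0x0, presumably because their data pointers were never filled in) are completed with the same retryable abort status. A hedged sketch of how an application-level completion callback could react to it, using SPDK's public completion type; io_ctx and its resubmit/complete hooks stand in for application code and are not SPDK API:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Application-side I/O context; purely illustrative. */
    struct io_ctx {
        void (*resubmit)(struct io_ctx *io);
        void (*complete)(struct io_ctx *io, int status);
    };

    /* Completion callback as passed to e.g. spdk_nvme_ns_cmd_read().
     * Treats "ABORTED - SQ DELETION" with dnr:0 as a transient failover
     * artifact and requeues the I/O instead of failing it upward. */
    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct io_ctx *io = arg;

        if (!spdk_nvme_cpl_is_error(cpl)) {
            io->complete(io, 0);
            return;
        }
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
            !cpl->status.dnr) {
            io->resubmit(io);   /* retry once the controller reconnects */
            return;
        }
        io->complete(io, -EIO); /* genuine, non-retryable error */
    }

This is a sketch of the policy, not bdev_nvme's actual code path; bdev_nvme handles the retry internally, which is why the fio workload driving this test keeps running across the failovers.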
00:34:14.938 [2024-12-05 11:17:32.742113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:34:14.938 [2024-12-05 11:17:32.742124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: queued WRITE lba:100528-100992 and READ lba:100008-100056 (sqid:1 cid:0, len:8, PRP1 0x0 PRP2 0x0) completed manually: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.941 [2024-12-05 11:17:32.760976] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:14.941 [2024-12-05 11:17:32.761050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.941 [2024-12-05 11:17:32.761069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.941 [2024-12-05 11:17:32.761087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.941 [2024-12-05 11:17:32.761102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.941 [2024-12-05 11:17:32.761128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.941 [2024-12-05 11:17:32.761143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.941 [2024-12-05 11:17:32.761157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:14.941 [2024-12-05 11:17:32.761172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:14.941 [2024-12-05 11:17:32.761186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:34:14.941 [2024-12-05 11:17:32.761243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1247f30 (9): Bad file descriptor 00:34:14.941 [2024-12-05 11:17:32.764574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:14.941 [2024-12-05 11:17:32.790911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
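
The abort storm and controller reset above are the failover path doing its job: host/failover.sh attaches the same subsystem under a single bdev name across several TCP trids, and when the active connection goes away its submission queue is deleted, so every queued command completes as ABORTED - SQ DELETION before bdev_nvme fails over to the next trid and resets the controller. A minimal sketch of that wiring, assuming the bdevperf RPC socket, ports and NQN used throughout this run (the traced commands themselves appear further down in this log):

    # attach the same subsystem under one bdev name over several paths;
    # -x failover makes the additional trids standby (failover) targets
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # dropping the currently active path then produces the
    # "Start failover from ... to ..." / "Resetting controller successful" pair seen above
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
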
00:34:14.941 9799.20 IOPS, 38.28 MiB/s
[2024-12-05T11:17:39.593Z] 9747.82 IOPS, 38.08 MiB/s
[2024-12-05T11:17:39.593Z] 9706.33 IOPS, 37.92 MiB/s
[2024-12-05T11:17:39.593Z] 9662.38 IOPS, 37.74 MiB/s
[2024-12-05T11:17:39.593Z] 9653.86 IOPS, 37.71 MiB/s
[2024-12-05T11:17:39.593Z] 9641.73 IOPS, 37.66 MiB/s
00:34:14.941 Latency(us)
00:34:14.941 [2024-12-05T11:17:39.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:14.941 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:14.941 Verification LBA range: start 0x0 length 0x4000
00:34:14.941 NVMe0n1 : 15.01 9637.53 37.65 249.81 0.00 12918.52 737.28 42941.68
00:34:14.941 [2024-12-05T11:17:39.593Z] ===================================================================================================================
00:34:14.941 [2024-12-05T11:17:39.593Z] Total : 9637.53 37.65 249.81 0.00 12918.52 737.28 42941.68
00:34:14.941 Received shutdown signal, test time was about 15.000000 seconds
00:34:14.941
00:34:14.941 Latency(us)
00:34:14.941 [2024-12-05T11:17:39.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:14.941 [2024-12-05T11:17:39.593Z] ===================================================================================================================
00:34:14.941 [2024-12-05T11:17:39.593Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89170
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89170 /var/tmp/bdevperf.sock
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89170 ']'
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:14.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
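
Note the -z flag on the bdevperf invocation above: it brings the app up idle, waiting on its RPC socket rather than starting I/O immediately, and waitforlisten simply polls (up to max_retries) until /var/tmp/bdevperf.sock accepts RPCs. The workload is only kicked off later via bdevperf.py perform_tests, as traced below. A condensed sketch of the pattern, reusing the paths from this run:

    # start bdevperf idle (-z) on a private RPC socket and background it
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # once the socket answers, configure the NVMe-oF paths over it
    # (bdev_nvme_attach_controller, as in the sketch earlier), then start the run:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
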
00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.941 11:17:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:15.200 11:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.200 11:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:15.200 11:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:15.458 [2024-12-05 11:17:39.955107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:15.459 11:17:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:15.716 [2024-12-05 11:17:40.163485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:15.716 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:15.975 NVMe0n1 00:34:15.975 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:16.233 00:34:16.233 11:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:16.493 00:34:16.751 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:16.751 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:16.751 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:17.009 11:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:20.294 11:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:20.294 11:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:20.294 11:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:20.294 11:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89311 00:34:20.294 11:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89311 00:34:21.678 { 00:34:21.678 "results": [ 00:34:21.678 { 00:34:21.678 "job": "NVMe0n1", 00:34:21.678 "core_mask": "0x1", 00:34:21.678 "workload": "verify", 00:34:21.678 "status": "finished", 00:34:21.678 "verify_range": { 00:34:21.678 "start": 0, 00:34:21.678 "length": 16384 00:34:21.678 }, 00:34:21.678 "queue_depth": 128, 
00:34:21.678 "io_size": 4096, 00:34:21.678 "runtime": 1.009025, 00:34:21.678 "iops": 9735.140358266644, 00:34:21.678 "mibps": 38.02789202447908, 00:34:21.678 "io_failed": 0, 00:34:21.678 "io_timeout": 0, 00:34:21.678 "avg_latency_us": 13094.533719210986, 00:34:21.678 "min_latency_us": 1864.655238095238, 00:34:21.678 "max_latency_us": 15042.07238095238 00:34:21.678 } 00:34:21.678 ], 00:34:21.678 "core_count": 1 00:34:21.678 } 00:34:21.678 11:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:21.678 [2024-12-05 11:17:38.747255] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:34:21.678 [2024-12-05 11:17:38.747369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89170 ] 00:34:21.678 [2024-12-05 11:17:38.894160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.678 [2024-12-05 11:17:38.949679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.678 [2024-12-05 11:17:41.538001] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:21.678 [2024-12-05 11:17:41.538104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.678 [2024-12-05 11:17:41.538126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-12-05 11:17:41.538143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.678 [2024-12-05 11:17:41.538158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-12-05 11:17:41.538173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.678 [2024-12-05 11:17:41.538187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-12-05 11:17:41.538202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:21.678 [2024-12-05 11:17:41.538216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:21.678 [2024-12-05 11:17:41.538231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:34:21.678 [2024-12-05 11:17:41.538277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:21.678 [2024-12-05 11:17:41.538303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5ef30 (9): Bad file descriptor 00:34:21.678 [2024-12-05 11:17:41.547255] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:21.678 Running I/O for 1 seconds... 
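
The JSON object above is bdevperf's machine-readable result for the 1-second verify run whose console log (try.txt) is replayed right after it; the same numbers are rendered as the human-readable Latency(us) table below. If that JSON is captured to a file, the headline figures can be extracted with jq; a small example, with bdevperf.json standing in for wherever the output was saved:

    # print IOPS and mean latency of the NVMe0n1 job from saved bdevperf JSON output
    jq -r '.results[] | select(.job == "NVMe0n1")
           | "\(.iops) IOPS, avg \(.avg_latency_us) us"' bdevperf.json
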
00:34:21.678 9695.00 IOPS, 37.87 MiB/s
00:34:21.678 Latency(us)
00:34:21.678 [2024-12-05T11:17:46.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:21.678 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:21.678 Verification LBA range: start 0x0 length 0x4000
00:34:21.678 NVMe0n1 : 1.01 9735.14 38.03 0.00 0.00 13094.53 1864.66 15042.07
00:34:21.678 [2024-12-05T11:17:46.330Z] ===================================================================================================================
00:34:21.678 [2024-12-05T11:17:46.330Z] Total : 9735.14 38.03 0.00 0.00 13094.53 1864.66 15042.07
00:34:21.678 11:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:21.678 11:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:34:21.678 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:21.936 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:21.936 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:34:22.194 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:22.450 11:17:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:34:25.735 11:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:25.735 11:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:34:25.735 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89170
00:34:25.735 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89170 ']'
00:34:25.735 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89170
00:34:25.735 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:34:25.735 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:25.735 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89170
00:34:25.736 killing process with pid 89170
11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:25.736 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:25.736 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89170'
00:34:25.736 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89170
00:34:25.736 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89170
00:34:25.994 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:34:25.994 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:26.253 rmmod nvme_tcp 00:34:26.253 rmmod nvme_fabrics 00:34:26.253 rmmod nvme_keyring 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 88820 ']' 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 88820 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88820 ']' 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88820 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88820 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88820' 00:34:26.253 killing process with pid 88820 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88820 00:34:26.253 11:17:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88820 00:34:26.512 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:26.512 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:34:26.512 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:34:26.512 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:26.512 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:26.512 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:26.512 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:26.771 11:17:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:34:26.771 11:17:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:34:26.771 00:34:26.771 real 0m32.579s 00:34:26.771 user 2m4.648s 00:34:26.771 sys 0m5.785s 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:26.771 ************************************ 00:34:26.771 END TEST nvmf_failover 00:34:26.771 ************************************ 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.771 ************************************ 00:34:26.771 START TEST nvmf_host_discovery 00:34:26.771 ************************************ 00:34:26.771 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:27.031 * Looking for test storage... 00:34:27.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:27.031 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:27.031 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:34:27.031 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:27.031 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:27.031 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.031 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:27.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.032 --rc genhtml_branch_coverage=1 00:34:27.032 --rc genhtml_function_coverage=1 00:34:27.032 --rc genhtml_legend=1 00:34:27.032 --rc geninfo_all_blocks=1 00:34:27.032 --rc geninfo_unexecuted_blocks=1 00:34:27.032 00:34:27.032 ' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:27.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.032 --rc genhtml_branch_coverage=1 00:34:27.032 --rc genhtml_function_coverage=1 00:34:27.032 --rc genhtml_legend=1 00:34:27.032 --rc geninfo_all_blocks=1 00:34:27.032 --rc geninfo_unexecuted_blocks=1 00:34:27.032 00:34:27.032 ' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:27.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.032 --rc genhtml_branch_coverage=1 00:34:27.032 --rc genhtml_function_coverage=1 00:34:27.032 --rc genhtml_legend=1 00:34:27.032 --rc geninfo_all_blocks=1 00:34:27.032 --rc geninfo_unexecuted_blocks=1 00:34:27.032 00:34:27.032 ' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:27.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.032 --rc genhtml_branch_coverage=1 00:34:27.032 --rc genhtml_function_coverage=1 00:34:27.032 --rc genhtml_legend=1 00:34:27.032 --rc geninfo_all_blocks=1 00:34:27.032 --rc geninfo_unexecuted_blocks=1 00:34:27.032 00:34:27.032 ' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:27.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:27.032 11:17:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:27.032 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@223 -- # create_target_ns 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:27.033 11:17:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # create_main_bridge 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # 
_ns=NVMF_TARGET_NS_CMD 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up initiator0 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0 up 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:34:27.033 11:17:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:34:27.033 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:34:27.293 10.0.0.1 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:27.293 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:34:27.293 10.0.0.2 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 
in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target0_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up initiator1 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name 
target1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:34:27.294 10.0.0.3 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:34:27.294 10.0.0.4 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:34:27.294 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 
-- # [[ veth == veth ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:34:27.295 11:17:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:27.295 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:27.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:27.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:34:27.555 00:34:27.555 --- 10.0.0.1 ping statistics --- 00:34:27.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.555 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.2 in_ns= count=1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:27.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:27.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:34:27.555 00:34:27.555 --- 10.0.0.2 ping statistics --- 00:34:27.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.555 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:34:27.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:27.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:34:27.555 00:34:27.555 --- 10.0.0.3 ping statistics --- 00:34:27.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.555 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:34:27.555 11:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:34:27.555 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:34:27.555 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.104 ms 00:34:27.555 00:34:27.555 --- 10.0.0.4 ping statistics --- 00:34:27.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.555 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # return 0 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:27.555 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:34:27.556 ' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=89664 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 89664 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89664 ']' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:27.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.556 11:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:27.556 [2024-12-05 11:17:52.188550] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:34:27.556 [2024-12-05 11:17:52.188666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.815 [2024-12-05 11:17:52.338091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.815 [2024-12-05 11:17:52.412112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.815 [2024-12-05 11:17:52.412179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.815 [2024-12-05 11:17:52.412191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.815 [2024-12-05 11:17:52.412201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.815 [2024-12-05 11:17:52.412210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
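In short, the setup trace above is nvmf/setup.sh assembling the test network and then starting the target: for each pair it creates initiatorN/targetN veth devices with *_br peers, moves the target side into the nvmf_ns_spdk namespace, derives dotted-quad addresses from an integer pool (167772161 is 10.0.0.1), mirrors each address into /sys/class/net/<dev>/ifalias, enslaves the *_br peers to the nvmf_br bridge, opens TCP port 4420 in iptables, ping-checks both directions, and finally launches nvmf_tgt inside the namespace. A condensed sketch of that flow, assuming root privileges and an SPDK checkout as the working directory; the RPC wait loop stands in for the harness's waitforlisten and is illustrative rather than a copy of it:

  # create the namespace and the bridge that ties the *_br peers together
  ip netns add nvmf_ns_spdk
  ip link add nvmf_br type bridge && ip link set nvmf_br up

  val_to_ip() { # e.g. 167772161 -> 10.0.0.1
    printf '%u.%u.%u.%u\n' $(($1 >> 24 & 255)) $(($1 >> 16 & 255)) $(($1 >> 8 & 255)) $(($1 & 255))
  }

  id=0 pool=167772161
  ip link add "initiator$id" type veth peer name "initiator${id}_br"
  ip link add "target$id" type veth peer name "target${id}_br"
  ip link set "target$id" netns nvmf_ns_spdk      # target side lives in the namespace
  ip addr add "$(val_to_ip "$pool")/24" dev "initiator$id"
  ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip $((pool + 1)))/24" dev "target$id"
  # setup.sh also mirrors each address into the device's ifalias for later lookup
  val_to_ip "$pool" > "/sys/class/net/initiator$id/ifalias"
  ip link set "initiator$id" up
  ip netns exec nvmf_ns_spdk ip link set "target$id" up
  for peer in "initiator${id}_br" "target${id}_br"; do
    ip link set "$peer" master nvmf_br && ip link set "$peer" up
  done
  # allow NVMe/TCP traffic (port 4420) in from the initiator side
  iptables -I INPUT 1 -i "initiator$id" -p tcp --dport 4420 -j ACCEPT

  # start the target inside the namespace and block until its RPC socket answers
  ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
  done

The ifalias mirror is what get_ip_address reads back later in the trace (cat /sys/class/net/<dev>/ifalias) when nvmf_legacy_env exports NVMF_FIRST_INITIATOR_IP, NVMF_FIRST_TARGET_IP, and the rest.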
00:34:27.815 [2024-12-05 11:17:52.412610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 [2024-12-05 11:17:53.295131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 [2024-12-05 11:17:53.303321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 null0 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 null1 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@45 -- # hostpid=89715 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89715 /tmp/host.sock 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89715 ']' 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:28.752 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.752 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.752 [2024-12-05 11:17:53.387845] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:34:28.752 [2024-12-05 11:17:53.387938] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89715 ] 00:34:29.011 [2024-12-05 11:17:53.529880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.011 [2024-12-05 11:17:53.580135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.288 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.557 11:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.557 [2024-12-05 11:17:54.043508] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.557 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@74 -- # jq '. | length' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.558 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.816 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.816 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:29.816 11:17:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:30.076 [2024-12-05 11:17:54.720853] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:30.076 [2024-12-05 11:17:54.720889] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:30.076 [2024-12-05 11:17:54.720904] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:30.335 [2024-12-05 11:17:54.806987] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:30.335 [2024-12-05 11:17:54.861338] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:30.335 [2024-12-05 11:17:54.862083] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1746860:1 started. 00:34:30.335 [2024-12-05 11:17:54.863878] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:30.335 [2024-12-05 11:17:54.863902] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:30.335 [2024-12-05 11:17:54.869051] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1746860 was disconnected and freed. delete nvme_qpair. 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.901 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:30.902 11:17:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:30.902 [2024-12-05 11:17:55.483051] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1746e10:1 started. 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.902 [2024-12-05 11:17:55.489930] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1746e10 was disconnected and freed. delete nvme_qpair. 
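Every assertion in this part of the trace funnels through the same polling wrapper: the @918-@924 expansions are waitforcondition from autotest_common.sh, and the @74/@75 lines are the notification bookkeeping in discovery.sh. A minimal sketch of both, reconstructed from the expansions visible in this log (treat the bodies as approximations, not the canonical sources):

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # The condition string calls other helpers, hence the eval.
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    get_notification_count() {
        # Only fetch events newer than the last notify_id we consumed.
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

With those two, the is_notification_count_eq N checks seen at @79/@80 reduce to waitforcondition 'get_notification_count && ((notification_count == expected_count))', which is exactly the condition string echoed throughout this trace.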
00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.902 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:31.160 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.160 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:31.160 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:31.160 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.161 [2024-12-05 11:17:55.588918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:31.161 [2024-12-05 11:17:55.590112] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:31.161 [2024-12-05 11:17:55.590148] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.161 [2024-12-05 11:17:55.676178] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:31.161 11:17:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:31.161 [2024-12-05 11:17:55.734602] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:31.161 [2024-12-05 11:17:55.734678] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:31.161 [2024-12-05 11:17:55.734688] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:31.161 [2024-12-05 11:17:55.734695] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:32.097 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:32.358 11:17:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.358 [2024-12-05 11:17:56.821878] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:32.358 [2024-12-05 11:17:56.821914] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.358 [2024-12-05 11:17:56.831537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.358 [2024-12-05 11:17:56.831570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.358 [2024-12-05 11:17:56.831584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.358 [2024-12-05 11:17:56.831608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.358 [2024-12-05 
11:17:56.831619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.358 [2024-12-05 11:17:56.831630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.358 [2024-12-05 11:17:56.831641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:32.358 [2024-12-05 11:17:56.831651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:32.358 [2024-12-05 11:17:56.831662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c15c0 is same with the state(6) to be set 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.358 [2024-12-05 11:17:56.841498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c15c0 (9): Bad file descriptor 00:34:32.358 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.358 [2024-12-05 11:17:56.851517] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:32.358 [2024-12-05 11:17:56.851541] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:32.358 [2024-12-05 11:17:56.851548] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:32.358 [2024-12-05 11:17:56.851555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:32.358 [2024-12-05 11:17:56.851595] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:32.358 [2024-12-05 11:17:56.851690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.358 [2024-12-05 11:17:56.851710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c15c0 with addr=10.0.0.2, port=4420 00:34:32.358 [2024-12-05 11:17:56.851723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c15c0 is same with the state(6) to be set 00:34:32.358 [2024-12-05 11:17:56.851739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c15c0 (9): Bad file descriptor 00:34:32.358 [2024-12-05 11:17:56.851755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:32.358 [2024-12-05 11:17:56.851764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:32.358 [2024-12-05 11:17:56.851777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:32.358 [2024-12-05 11:17:56.851786] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:32.358 [2024-12-05 11:17:56.851794] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
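The flood of connect() failed, errno = 111 and Bad file descriptor entries around here is the expected fallout of the step the test just took, not a malfunction: errno 111 is ECONNREFUSED, and the host keeps retrying 10.0.0.2:4420 because discovery.sh@127 has just torn that listener down. The script side of this step is roughly the following, reconstructed from the @127/@131 lines in this trace:

    # Drop the first listener, then poll until the discovery service
    # prunes the dead 4420 path and only 4421 remains.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'

The retries stop once the next discovery log page reports the 4420 path "not found", which shows up a few entries further down.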
00:34:32.358 [2024-12-05 11:17:56.851800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:32.358 [2024-12-05 11:17:56.861595] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:32.358 [2024-12-05 11:17:56.861618] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:32.358 [2024-12-05 11:17:56.861624] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:32.358 [2024-12-05 11:17:56.861629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:32.358 [2024-12-05 11:17:56.861650] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:32.358 [2024-12-05 11:17:56.861693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.358 [2024-12-05 11:17:56.861707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c15c0 with addr=10.0.0.2, port=4420 00:34:32.358 [2024-12-05 11:17:56.861717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c15c0 is same with the state(6) to be set 00:34:32.358 [2024-12-05 11:17:56.861729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c15c0 (9): Bad file descriptor 00:34:32.358 [2024-12-05 11:17:56.861741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:32.358 [2024-12-05 11:17:56.861750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:32.358 [2024-12-05 11:17:56.861759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:32.358 [2024-12-05 11:17:56.861766] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:32.358 [2024-12-05 11:17:56.861771] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:32.359 [2024-12-05 11:17:56.861776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:32.359 [2024-12-05 11:17:56.871659] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:32.359 [2024-12-05 11:17:56.871677] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:32.359 [2024-12-05 11:17:56.871683] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:32.359 [2024-12-05 11:17:56.871688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:32.359 [2024-12-05 11:17:56.871711] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:32.359 [2024-12-05 11:17:56.871753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.359 [2024-12-05 11:17:56.871768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c15c0 with addr=10.0.0.2, port=4420 00:34:32.359 [2024-12-05 11:17:56.871777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c15c0 is same with the state(6) to be set 00:34:32.359 [2024-12-05 11:17:56.871790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c15c0 (9): Bad file descriptor 00:34:32.359 [2024-12-05 11:17:56.871802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:32.359 [2024-12-05 11:17:56.871810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:32.359 [2024-12-05 11:17:56.871819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:32.359 [2024-12-05 11:17:56.871826] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:32.359 [2024-12-05 11:17:56.871832] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:32.359 [2024-12-05 11:17:56.871837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:32.359 [2024-12-05 11:17:56.881719] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:32.359 [2024-12-05 11:17:56.881738] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:32.359 [2024-12-05 11:17:56.881743] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:32.359 [2024-12-05 11:17:56.881749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:32.359 [2024-12-05 11:17:56.881768] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
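For reference while reading the eval '[[ "$(get_bdev_list)" ...' lines: the two query helpers expanded at @59 and @55 throughout this trace amount to the following sketch (reconstructed from the expansions; the canonical definitions live in host/discovery.sh):

    get_subsystem_names() {
        # Controller names as one sorted, space-separated string.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # Bdev names, e.g. "nvme0n1 nvme0n2" once both namespaces attach.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

The trailing xargs is what joins jq's one-name-per-line output into the single strings that the [[ ... == ... ]] comparisons match against.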
00:34:32.359 [2024-12-05 11:17:56.881806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.359 [2024-12-05 11:17:56.881819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c15c0 with addr=10.0.0.2, port=4420 00:34:32.359 [2024-12-05 11:17:56.881829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c15c0 is same with the state(6) to be set 00:34:32.359 [2024-12-05 11:17:56.881841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c15c0 (9): Bad file descriptor 00:34:32.359 [2024-12-05 11:17:56.881852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:32.359 [2024-12-05 11:17:56.881861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:32.359 [2024-12-05 11:17:56.881870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:32.359 [2024-12-05 11:17:56.881877] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:32.359 [2024-12-05 11:17:56.881883] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:32.359 [2024-12-05 11:17:56.881888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.359 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.359 [2024-12-05 11:17:56.891777] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:32.359 [2024-12-05 11:17:56.891809] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:32.359 [2024-12-05 11:17:56.891815] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:32.359 [2024-12-05 11:17:56.891821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:32.359 [2024-12-05 11:17:56.891861] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:32.359 [2024-12-05 11:17:56.891913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.359 [2024-12-05 11:17:56.891929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c15c0 with addr=10.0.0.2, port=4420 00:34:32.359 [2024-12-05 11:17:56.891939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c15c0 is same with the state(6) to be set 00:34:32.359 [2024-12-05 11:17:56.891953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c15c0 (9): Bad file descriptor 00:34:32.359 [2024-12-05 11:17:56.891966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:32.359 [2024-12-05 11:17:56.891992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:32.359 [2024-12-05 11:17:56.892002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:32.359 [2024-12-05 11:17:56.892011] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:32.359 [2024-12-05 11:17:56.892017] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:32.359 [2024-12-05 11:17:56.892023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:32.359 [2024-12-05 11:17:56.901869] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:32.359 [2024-12-05 11:17:56.901888] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:32.359 [2024-12-05 11:17:56.901894] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:32.359 [2024-12-05 11:17:56.901899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:32.359 [2024-12-05 11:17:56.901922] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:32.359 [2024-12-05 11:17:56.901964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.359 [2024-12-05 11:17:56.901979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c15c0 with addr=10.0.0.2, port=4420 00:34:32.359 [2024-12-05 11:17:56.901989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c15c0 is same with the state(6) to be set 00:34:32.359 [2024-12-05 11:17:56.902002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c15c0 (9): Bad file descriptor 00:34:32.359 [2024-12-05 11:17:56.902015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:32.359 [2024-12-05 11:17:56.902024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:32.359 [2024-12-05 11:17:56.902034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:32.359 [2024-12-05 11:17:56.902042] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
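The [[ 4421 == \4\4\2\1 ]] comparison coming up is fed by the third query helper, expanded at @63: it lists the trsvcid of every active path on one controller. A sketch reconstructed from the trace:

    get_subsystem_paths() {
        local name=$1
        # One numerically sorted line of service IDs: "4420 4421" while
        # both listeners are live, just "4421" after 4420 is removed.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }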
00:34:32.360 [2024-12-05 11:17:56.902048] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:32.360 [2024-12-05 11:17:56.902053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:32.360 [2024-12-05 11:17:56.908675] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:32.360 [2024-12-05 11:17:56.908698] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:32.360 11:17:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:32.360 11:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:32.360 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:32.360 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.360 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.360 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:32.360 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.620 11:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.997 [2024-12-05 11:17:58.241030] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:33.997 [2024-12-05 11:17:58.241068] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:33.997 [2024-12-05 11:17:58.241084] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:33.997 [2024-12-05 11:17:58.327127] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:33.997 [2024-12-05 11:17:58.385497] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:33.998 [2024-12-05 11:17:58.386312] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16c1260:1 started. 00:34:33.998 [2024-12-05 11:17:58.388778] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:33.998 [2024-12-05 11:17:58.388825] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:33.998 [2024-12-05 11:17:58.390051] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16c1260 was disconnected and freed. delete nvme_qpair. 
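The step that follows (@143) re-registers a discovery service under a name that is already taken and expects the RPC to fail, so it runs under the harness's NOT wrapper, whose @652-@679 expansions fill the next stretch of the trace. A simplified sketch of what NOT does (the real helper in autotest_common.sh also validates that its argument is executable before running it):

    NOT() {
        # Run the wrapped command, capturing failure instead of aborting.
        local es=0
        "$@" || es=$?
        # Exit codes above 128 mean the command died to a signal;
        # that is a crash, not the clean failure we are waiting for.
        ((es > 128)) && return "$es"
        # Succeed only if the wrapped command failed.
        ((!es == 0))
    }

Inverted like this, the expected Code=-17 rejection below counts as a pass for the test.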
00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.998 2024/12/05 11:17:58 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:34:33.998 request: 00:34:33.998 { 00:34:33.998 "method": "bdev_nvme_start_discovery", 00:34:33.998 "params": { 00:34:33.998 "name": "nvme", 00:34:33.998 "trtype": "tcp", 00:34:33.998 "traddr": "10.0.0.2", 00:34:33.998 "adrfam": "ipv4", 00:34:33.998 "trsvcid": "8009", 00:34:33.998 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:33.998 "wait_for_attach": true 00:34:33.998 } 00:34:33.998 } 00:34:33.998 Got JSON-RPC error response 00:34:33.998 GoRPCClient: error on JSON-RPC call 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
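Code=-17 is -EEXIST surfaced through JSON-RPC: a discovery service named nvme is already registered, so the duplicate bdev_nvme_start_discovery is correctly refused. The @145/@67 check that follows confirms the original service survived, using one more query helper (sketch reconstructed from the trace):

    get_discovery_ctrlrs() {
        # Names of the registered discovery services; just "nvme" here.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info \
            | jq -r '.[].name' | sort | xargs
    }

The same probe is then repeated with -b nvme_second against the same 10.0.0.2:8009 target, and the log shows it failing with the same File exists error, so uniqueness is evidently keyed on the discovery target address as well as on the service name.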
00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.998 2024/12/05 11:17:58 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:34:33.998 request: 00:34:33.998 { 00:34:33.998 "method": "bdev_nvme_start_discovery", 00:34:33.998 "params": { 00:34:33.998 "name": "nvme_second", 00:34:33.998 "trtype": "tcp", 00:34:33.998 "traddr": "10.0.0.2", 00:34:33.998 "adrfam": "ipv4", 00:34:33.998 "trsvcid": 
"8009", 00:34:33.998 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:33.998 "wait_for_attach": true 00:34:33.998 } 00:34:33.998 } 00:34:33.998 Got JSON-RPC error response 00:34:33.998 GoRPCClient: error on JSON-RPC call 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:33.998 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:33.999 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:33.999 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:33.999 11:17:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.999 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:33.999 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:33.999 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:33.999 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.999 11:17:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.385 [2024-12-05 11:17:59.649141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.385 [2024-12-05 11:17:59.649223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c1260 with addr=10.0.0.2, port=8010 00:34:35.385 [2024-12-05 11:17:59.649245] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:35.385 [2024-12-05 11:17:59.649254] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:35.385 [2024-12-05 11:17:59.649263] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:36.322 [2024-12-05 11:18:00.649156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:36.322 [2024-12-05 11:18:00.649228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1751640 with addr=10.0.0.2, port=8010 00:34:36.322 [2024-12-05 11:18:00.649253] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:36.322 [2024-12-05 11:18:00.649265] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:36.322 [2024-12-05 11:18:00.649275] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:37.258 [2024-12-05 11:18:01.649019] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:37.259 2024/12/05 11:18:01 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:34:37.259 request: 00:34:37.259 { 00:34:37.259 "method": "bdev_nvme_start_discovery", 00:34:37.259 "params": { 00:34:37.259 "name": "nvme_second", 00:34:37.259 "trtype": "tcp", 00:34:37.259 "traddr": "10.0.0.2", 00:34:37.259 "adrfam": "ipv4", 00:34:37.259 "trsvcid": "8010", 00:34:37.259 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:37.259 "wait_for_attach": false, 00:34:37.259 "attach_timeout_ms": 3000 00:34:37.259 } 00:34:37.259 } 00:34:37.259 Got JSON-RPC error response 00:34:37.259 GoRPCClient: error on JSON-RPC call 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89715 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:37.259 rmmod nvme_tcp 00:34:37.259 rmmod nvme_fabrics 00:34:37.259 rmmod nvme_keyring 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 89664 ']' 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 89664 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 89664 ']' 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 89664 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89664 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:37.259 
killing process with pid 89664 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89664' 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 89664 00:34:37.259 11:18:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 89664 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:34:37.518 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:34:37.776 11:18:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # continue 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # continue 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:34:37.776 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:37.776 00:34:37.777 real 0m10.834s 00:34:37.777 user 0m19.814s 00:34:37.777 sys 0m2.502s 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:37.777 ************************************ 00:34:37.777 END TEST nvmf_host_discovery 00:34:37.777 ************************************ 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.777 ************************************ 00:34:37.777 START TEST nvmf_host_multipath_status 00:34:37.777 ************************************ 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:37.777 * Looking for test storage... 
00:34:37.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.777 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:38.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.037 --rc genhtml_branch_coverage=1 00:34:38.037 --rc genhtml_function_coverage=1 00:34:38.037 --rc genhtml_legend=1 00:34:38.037 --rc geninfo_all_blocks=1 00:34:38.037 --rc geninfo_unexecuted_blocks=1 00:34:38.037 00:34:38.037 ' 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:38.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.037 --rc genhtml_branch_coverage=1 00:34:38.037 --rc genhtml_function_coverage=1 00:34:38.037 --rc genhtml_legend=1 00:34:38.037 --rc geninfo_all_blocks=1 00:34:38.037 --rc geninfo_unexecuted_blocks=1 00:34:38.037 00:34:38.037 ' 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:38.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.037 --rc genhtml_branch_coverage=1 00:34:38.037 --rc genhtml_function_coverage=1 00:34:38.037 --rc genhtml_legend=1 00:34:38.037 --rc geninfo_all_blocks=1 00:34:38.037 --rc geninfo_unexecuted_blocks=1 00:34:38.037 00:34:38.037 ' 00:34:38.037 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:38.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.037 --rc genhtml_branch_coverage=1 00:34:38.037 --rc genhtml_function_coverage=1 00:34:38.037 --rc genhtml_legend=1 00:34:38.037 --rc geninfo_all_blocks=1 00:34:38.037 --rc geninfo_unexecuted_blocks=1 00:34:38.037 00:34:38.038 ' 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:38.038 11:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:38.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:34:38.038 11:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@280 -- # nvmf_veth_init 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@223 -- # create_target_ns 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:38.038 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:38.039 
11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # create_main_bridge 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@105 -- # delete_main_bridge 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # 
ips=() 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator0 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target0 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0 up 
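The setup.sh trace above is assembling a veth-based test network: each interface pair has an initiator end that stays on the host and a target end that is moved into the nvmf_ns_spdk namespace, with the *_br peer ends enslaved to the nvmf_br bridge. A condensed sketch of the equivalent plain ip(8) commands for the first pair (device and namespace names as logged; the in-tree script interleaves these steps with bookkeeping and checks):

    ip netns add nvmf_ns_spdk                    # target-side namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set initiator0 up
    ip link set initiator0_br up
    ip link set target0 up
    ip link set target0_br up
    ip link set target0 netns nvmf_ns_spdk       # move the target end
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br     # bridge the host-side peers
    ip link set target0_br master nvmf_br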
00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target0_br 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target0 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:34:38.039 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:34:38.040 10.0.0.1 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:34:38.040 10.0.0.2 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator0 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 
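Addresses for each pair come from an integer pool: the first pair consumes 167772161 and 167772162 (0x0A000001 and 0x0A000002), which val_to_ip renders as 10.0.0.1 for initiator0 and 10.0.0.2 for target0 inside the namespace. A hedged sketch of such a helper, assuming simple bit-shift octet extraction (the in-tree val_to_ip may be implemented differently):

    # Convert a 32-bit integer into dotted-quad form,
    # e.g. 167772161 -> 10.0.0.1, 167772162 -> 10.0.0.2
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
    }
    ip addr add "$(val_to_ip 167772161)/24" dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772162)/24" dev target0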
00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target0_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator1 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # 
local dev=initiator1 in_ns= 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:34:38.040 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:34:38.041 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:34:38.041 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:34:38.041 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target1 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1 up 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target1_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target1 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:38.302 11:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772163 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:34:38.302 10.0.0.3 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772164 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:34:38.302 10.0.0.4 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator1 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:34:38.302 11:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target1_br 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 2 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:34:38.302 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:38.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:38.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:34:38.303 00:34:38.303 --- 10.0.0.1 ping statistics --- 00:34:38.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.303 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:38.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:38.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:34:38.303 00:34:38.303 --- 10.0.0.2 ping statistics --- 00:34:38.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.303 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:34:38.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:38.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:34:38.303 00:34:38.303 --- 10.0.0.3 ping statistics --- 00:34:38.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.303 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:34:38.303 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:34:38.303 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:34:38.303 00:34:38.303 --- 10.0.0.4 ping statistics --- 00:34:38.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.303 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:34:38.303 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # return 0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ 
-n '' ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:34:38.304 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:38.564 11:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:34:38.564 ' 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:38.564 11:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:38.564 11:18:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=90230 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 90230 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90230 ']' 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.564 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:38.564 [2024-12-05 11:18:03.076666] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:34:38.564 [2024-12-05 11:18:03.077236] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.825 [2024-12-05 11:18:03.224544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:38.825 [2024-12-05 11:18:03.290167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.825 [2024-12-05 11:18:03.290223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.825 [2024-12-05 11:18:03.290238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.825 [2024-12-05 11:18:03.290252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.825 [2024-12-05 11:18:03.290263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
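Setup note: nvmf/setup.sh allocates addresses from an integer pool that advances by two per interface pair — the initiator end takes the pool value and the target end takes the next one, which is why pair 0 came out as 10.0.0.1/10.0.0.2 above and pair 1 (setup_interface_pair 1 veth 167772163) as 10.0.0.3/10.0.0.4. A minimal standalone bash sketch of that conversion; val_to_ip mirrors the helper traced above, while the exact octet extraction and the example calls are illustrative only:

# Turn a 32-bit pool value into the dotted quad the trace prints.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) \
        $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) \
        $((val & 0xff))
}

# Pool seen in the trace: 167772161 (0x0A000001), stepping by 2 per pair.
val_to_ip 167772161   # -> 10.0.0.1  (initiator0)
val_to_ip 167772162   # -> 10.0.0.2  (target0, moved into nvmf_ns_spdk)
val_to_ip 167772163   # -> 10.0.0.3  (initiator1)
val_to_ip 167772164   # -> 10.0.0.4  (target1)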
00:34:38.825 [2024-12-05 11:18:03.291271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.825 [2024-12-05 11:18:03.291278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90230 00:34:38.825 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:39.392 [2024-12-05 11:18:03.738858] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.392 11:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:39.650 Malloc0 00:34:39.650 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:40.037 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:40.037 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.296 [2024-12-05 11:18:04.756364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.296 11:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:40.555 [2024-12-05 11:18:05.056542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90321 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90321 /var/tmp/bdevperf.sock 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90321 ']' 00:34:40.555 11:18:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.555 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:40.812 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.812 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:40.812 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:41.069 11:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:41.634 Nvme0n1 00:34:41.634 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:41.891 Nvme0n1 00:34:41.891 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:41.891 11:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:44.420 11:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:44.420 11:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:44.420 11:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:44.420 11:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:45.351 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:45.351 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:45.351 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.351 11:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.915 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.479 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.479 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.479 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:46.479 11:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.479 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.479 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:46.479 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.479 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:46.736 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.736 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:46.736 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.736 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:46.994 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.994 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state 
non_optimized optimized 00:34:46.995 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:47.253 11:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:47.510 11:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:48.445 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:48.445 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:48.445 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.445 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:49.012 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.270 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.270 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:49.270 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:49.270 11:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.528 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.528 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:49.528 11:18:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:49.528 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.095 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.095 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:50.095 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.095 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:50.095 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.095 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:50.095 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:50.354 11:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:50.612 11:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:51.560 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:51.560 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:51.560 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.560 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:51.818 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.818 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:51.818 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:51.818 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.075 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.075 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.075 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:52.075 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.333 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.333 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:52.333 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.333 11:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:52.591 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.591 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:52.591 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.591 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:52.849 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.849 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:52.849 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:52.850 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.107 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.107 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:53.107 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:53.364 11:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:53.622 11:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:54.553 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:54.553 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:54.553 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.553 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:54.810 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.810 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:54.810 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:54.810 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.377 11:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.635 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.635 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:55.635 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.635 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.892 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.892 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:55.892 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.892 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:56.150 11:18:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:56.150 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:56.150 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:56.409 11:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:56.667 11:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:57.633 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:57.633 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:57.633 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.633 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.200 11:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:58.768 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.768 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:58.768 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.768 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
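
[Annotation] set_ANA_state (the @59/@60 pair above) drives the two target listeners to new ANA states over the target-side RPC socket; the `sleep 1` that follows gives the initiator time to observe the change (the target raises an asynchronous event and the host re-reads the ANA log page). Reconstructed sketch, same $rootdir assumption as above:

    # set_ANA_state <state for port 4420> <state for port 4421>
    # where each state is optimized / non_optimized / inaccessible
    set_ANA_state() {
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
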
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.028 11:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:59.593 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:59.593 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:59.593 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:59.851 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:00.110 11:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:01.487 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:01.487 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:01.487 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.487 11:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:01.487 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:01.487 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:01.487 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.487 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:01.745 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
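
[Annotation] check_status takes six booleans whose order can be read off the @68-@73 calls above: current on 4420/4421, then connected on 4420/4421, then accessible on 4420/4421. So the @114 call `check_status false true true true false true` asserts that with 4420 inaccessible and 4421 optimized, both paths stay connected at the transport level but only 4421 is accessible and carrying I/O. Sketch (error handling elided; under autotest's errexit-style harness any failing port_status aborts the run anyway):

    # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }
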
\t\r\u\e ]] 00:35:01.745 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:01.745 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.745 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:02.004 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.004 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:02.004 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.004 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:02.263 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.263 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:02.263 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.263 11:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:02.522 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.522 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:02.523 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.523 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:02.781 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.781 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:03.039 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:03.039 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:03.297 11:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:03.554 11:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@120 -- # sleep 1 00:35:04.951 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:04.952 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:04.952 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.952 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:04.952 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.952 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:04.952 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.952 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:05.221 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.221 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:05.221 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.221 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:05.479 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.479 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:05.479 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.479 11:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:05.735 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.735 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:05.735 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.735 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:05.993 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.993 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
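
[Annotation] The @116 step above is the pivot of the second half of the test: bdev_nvme_set_multipath_policy switches Nvme0n1 from the default active_passive policy (exactly one "current" path at a time, as in every check up to this point) to active_active. With both listeners then set optimized, the @121 check expects current=true on 4420 and 4421 simultaneously, since under active_active all optimized paths service I/O. The combinations that follow obey the same rule: non_optimized+optimized leaves only the optimized 4421 current, while non_optimized+non_optimized makes both current again. The policy switch itself, as invoked above ($rootdir as before):

    "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
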
00:35:05.993 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.993 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.252 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.252 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:06.252 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:06.509 11:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:06.767 11:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:07.703 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:07.703 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:07.703 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.703 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:07.961 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:07.961 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:07.961 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.961 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.218 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.218 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.218 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.218 11:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.476 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.476 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.476 11:18:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.476 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:08.734 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.734 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:08.734 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.734 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:08.992 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.992 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:08.992 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:08.992 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.250 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.250 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:09.250 11:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:09.509 11:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:09.766 11:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:10.701 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:10.701 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:10.701 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.701 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:10.957 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.957 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:10.957 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.957 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.215 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.215 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.215 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:11.215 11:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.473 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.473 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:11.473 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.473 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:11.755 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.755 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:11.755 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.755 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:12.321 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.322 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:12.322 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:12.322 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.322 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.322 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:12.322 11:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:12.580 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:12.866 11:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:13.837 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:13.837 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.837 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:13.837 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.096 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.096 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:14.096 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.096 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:14.354 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:14.354 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:14.354 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.354 11:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:14.612 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.612 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:14.612 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.612 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:14.871 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.871 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:14.871 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:14.871 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.438 11:18:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:15.438 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:15.438 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.438 11:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90321 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90321 ']' 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90321 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90321 00:35:15.438 killing process with pid 90321 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90321' 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90321 00:35:15.438 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90321 00:35:15.438 { 00:35:15.438 "results": [ 00:35:15.438 { 00:35:15.438 "job": "Nvme0n1", 00:35:15.438 "core_mask": "0x4", 00:35:15.438 "workload": "verify", 00:35:15.438 "status": "terminated", 00:35:15.438 "verify_range": { 00:35:15.438 "start": 0, 00:35:15.438 "length": 16384 00:35:15.438 }, 00:35:15.438 "queue_depth": 128, 00:35:15.438 "io_size": 4096, 00:35:15.438 "runtime": 33.498695, 00:35:15.438 "iops": 9924.088087610577, 00:35:15.438 "mibps": 38.765969092228815, 00:35:15.438 "io_failed": 0, 00:35:15.438 "io_timeout": 0, 00:35:15.438 "avg_latency_us": 12876.080336394643, 00:35:15.438 "min_latency_us": 111.17714285714285, 00:35:15.438 "max_latency_us": 4026531.84 00:35:15.438 } 00:35:15.438 ], 00:35:15.438 "core_count": 1 00:35:15.438 } 00:35:15.699 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90321 00:35:15.699 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:35:15.699 [2024-12-05 11:18:05.124128] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
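
[Annotation] With the ANA matrix covered, @137 killprocess tears down bdevperf (pid 90321). The @954-@978 lines above walk through the autotest_common.sh helper: check a pid was given, `kill -0` it to confirm it is alive, on Linux refuse to signal a bare `sudo` front process (here the comm is reactor_2, so it proceeds), then kill and wait so the job's final JSON (printed above) is collected. A reconstructed sketch; the sudo branch is simplified relative to the real helper:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1                         # still running?
        if [[ $(uname) == Linux ]]; then
            # never signal a plain 'sudo' wrapper directly
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
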
00:35:15.699 [2024-12-05 11:18:05.124215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90321 ] 00:35:15.699 [2024-12-05 11:18:05.271040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.699 [2024-12-05 11:18:05.335829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:15.699 Running I/O for 90 seconds... 00:35:15.699 10654.00 IOPS, 41.62 MiB/s [2024-12-05T11:18:40.351Z] 10635.00 IOPS, 41.54 MiB/s [2024-12-05T11:18:40.352Z] 10333.33 IOPS, 40.36 MiB/s [2024-12-05T11:18:40.352Z] 10028.75 IOPS, 39.17 MiB/s [2024-12-05T11:18:40.352Z] 10085.00 IOPS, 39.39 MiB/s [2024-12-05T11:18:40.352Z] 10115.17 IOPS, 39.51 MiB/s [2024-12-05T11:18:40.352Z] 10086.29 IOPS, 39.40 MiB/s [2024-12-05T11:18:40.352Z] 10078.50 IOPS, 39.37 MiB/s [2024-12-05T11:18:40.352Z] 10094.56 IOPS, 39.43 MiB/s [2024-12-05T11:18:40.352Z] 10141.30 IOPS, 39.61 MiB/s [2024-12-05T11:18:40.352Z] 10185.18 IOPS, 39.79 MiB/s [2024-12-05T11:18:40.352Z] 10218.33 IOPS, 39.92 MiB/s [2024-12-05T11:18:40.352Z] 10258.00 IOPS, 40.07 MiB/s [2024-12-05T11:18:40.352Z] 10276.43 IOPS, 40.14 MiB/s [2024-12-05T11:18:40.352Z] [2024-12-05 11:18:20.931725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.931789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.931837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.931853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.931872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.931886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.931904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.931917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.931936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.931949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.931968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.931980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.931999] nvme_qpair.c: 
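
[Annotation] The terminated-job JSON above is internally consistent with the per-second samples: 9924.09 IOPS at io_size 4096 is 9924.09 * 4096 / 2^20 ≈ 38.77 MiB/s, exactly the reported "mibps", and in line with the ~39-41 MiB/s samples while the run was healthy. "status": "terminated" with runtime ≈ 33.5 s also confirms killprocess stopped the nominal 90-second job early, yet "io_failed": 0 despite all the path flapping. The same cross-check, assuming the JSON block were saved to a hypothetical results.json:

    jq '.results[0] | .iops * .io_size / 1048576' results.json   # ≈ 38.77 (MiB/s)
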
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.932370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.932384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:15.700 
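
[Annotation] The nvme_qpair.c notices that dominate the rest of try.txt are the expected fallout of each ANA flip: every command in flight on a listener that just went inaccessible completes with status (03/02), which decodes per the NVMe base spec as status code type 0x3 (Path Related Status), status code 0x02 (Asymmetric Access Inaccessible). The multipath bdev treats these as retryable path errors and reissues the I/O once an accessible path is available, which is why the job still ends with "io_failed": 0. The timestamps line up: this burst at 11:18:20.93 matches the @108 set_ANA_state step earlier in the trace, the moment both listeners were driven inaccessible.
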
[2024-12-05 11:18:20.933428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.933445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.933479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.933512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.933545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.933577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.700 [2024-12-05 11:18:20.933628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 
cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.933981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.933995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.934014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.934028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.934054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.934067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.700 [2024-12-05 11:18:20.934086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.700 [2024-12-05 11:18:20.934100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934425] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.701 [2024-12-05 11:18:20.934799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.701 [2024-12-05 11:18:20.934834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:15.701 [2024-12-05 11:18:20.934855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:35:15.701 [2024-12-05 11:18:20.934868 - 11:18:20.937609] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: repeated command/completion pairs for READ sqid:1 lba:3600-4088 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 lba:4280-4352 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 sqhd:0037-007f p:0 m:0 dnr:0]
00:35:15.703 9858.20 IOPS, 38.51 MiB/s [2024-12-05T11:18:40.355Z]
9242.06 IOPS, 36.10 MiB/s [2024-12-05T11:18:40.355Z]
8698.41 IOPS, 33.98 MiB/s [2024-12-05T11:18:40.355Z]
8215.17 IOPS, 32.09 MiB/s [2024-12-05T11:18:40.355Z]
8107.37 IOPS, 31.67 MiB/s [2024-12-05T11:18:40.355Z]
8198.35 IOPS, 32.02 MiB/s [2024-12-05T11:18:40.355Z]
8276.33 IOPS, 32.33 MiB/s [2024-12-05T11:18:40.355Z]
8462.55 IOPS, 33.06 MiB/s [2024-12-05T11:18:40.355Z]
8719.39 IOPS, 34.06 MiB/s [2024-12-05T11:18:40.355Z]
8925.50 IOPS, 34.87 MiB/s [2024-12-05T11:18:40.355Z]
9076.60 IOPS, 35.46 MiB/s [2024-12-05T11:18:40.355Z]
9172.85 IOPS, 35.83 MiB/s [2024-12-05T11:18:40.355Z]
9255.56 IOPS, 36.15 MiB/s [2024-12-05T11:18:40.355Z]
9348.07 IOPS, 36.52 MiB/s [2024-12-05T11:18:40.355Z]
9543.66 IOPS, 37.28 MiB/s [2024-12-05T11:18:40.355Z]
9697.77 IOPS, 37.88 MiB/s [2024-12-05T11:18:40.355Z]
00:35:15.703 [2024-12-05 11:18:37.383859 - 11:18:37.387657] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: repeated command/completion pairs for WRITE sqid:1 lba:55360-55648 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:55032-55344 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 sqhd:001d-003a p:0 m:0 dnr:0]
00:35:15.704 9798.87 IOPS, 38.28 MiB/s [2024-12-05T11:18:40.356Z]
9854.69 IOPS, 38.49 MiB/s [2024-12-05T11:18:40.356Z]
9902.70 IOPS, 38.68 MiB/s [2024-12-05T11:18:40.356Z]
Received shutdown signal, test time was about 33.499359 seconds
00:35:15.704
00:35:15.704                                                                  Latency(us)
00:35:15.704 [2024-12-05T11:18:40.356Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average       min        max
00:35:15.704 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:35:15.704 Verification LBA range: start 0x0 length 0x4000
00:35:15.704 Nvme0n1                                 :      33.50  9924.09    38.77     0.00     0.00   12876.08    111.18 4026531.84
00:35:15.704 [2024-12-05T11:18:40.356Z] ===================================================================================================================
00:35:15.704 [2024-12-05T11:18:40.356Z] Total                                   :          9924.09    38.77     0.00     0.00   12876.08    111.18 4026531.84
00:35:15.704 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
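The ASYMMETRIC ACCESS INACCESSIBLE completions condensed above are expected noise for this test: multipath_status.sh appears to be flipping the ANA state of one path while I/O runs, and the per-second IOPS samples dip and recover as the initiator fails over. The closing table is the standard bdevperf summary, one row per job plus a Total row. A quick, illustrative way to pull the aggregate numbers out of a captured log is an awk one-liner keyed on the Total row; summary.log is a hypothetical file holding the table with the timestamp prefixes already stripped, not something the test suite produces:

    # Total row layout: Total : <IOPS> <MiB/s> <Fail/s> <TO/s> <Average> <min> <max>
    awk '/^Total[[:space:]]+:/ { print "iops=" $3, "avg_lat_us=" $7 }' summary.log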
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20}
00:35:15.963 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:35:15.963 rmmod nvme_tcp
00:35:15.963 rmmod nvme_fabrics
00:35:15.963 rmmod nvme_keyring
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 90230 ']'
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 90230
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90230 ']'
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90230
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90230
00:35:16.222 killing process with pid 90230
11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90230'
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90230
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90230
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
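nvmfcleanup above unloads the kernel initiator modules inside a bounded retry loop with errexit suspended (module removal can fail while references remain), and killprocess then tears down the target by pid: probe with kill -0, signal, and wait to reap. A minimal sketch of that idiom, assuming the target process is a child of the script; the sleep between retries is a guess, since the trace does not show the loop body in full:

    nvmfcleanup() {
        sync
        set +e                      # removal may fail while the module is busy
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break
            sleep 1                 # assumed backoff, not visible in the trace
        done
        modprobe -v -r nvme-fabrics
        set -e
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0  # nothing to do if it already exited
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                 # reap the child and collect its status
    }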
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:35:16.222 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@122 -- # delete_dev nvmf_br
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns=
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br'
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete nvmf_br
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]]
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator0
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns=
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0'
00:35:16.481 11:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator0
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]]
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator1
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns=
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1'
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator1
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]]
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]]
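The delete_main_bridge/delete_dev calls traced above all funnel through one pattern: build the ip command as a string, prefix it with an 'ip netns exec <ns>' wrapper only when a namespace is given, and run the result through eval so the same helper works inside and outside the target namespace. A simplified sketch of that pattern (argument handling trimmed; the namespaced call is a hypothetical usage, since every delete in this trace runs with in_ns empty):

    delete_dev() {
        local dev=$1 in_ns=${2:-}              # empty in_ns => root namespace
        [[ -n $in_ns ]] && in_ns="ip netns exec $in_ns"
        eval "$in_ns ip link delete $dev"      # eval splices in the optional prefix
    }

    delete_dev nvmf_br                         # as in the trace above
    delete_dev somedev nvmf_ns_spdk            # hypothetical namespaced variant

target0 and target1 take the continue branch instead of delete_dev: they lived in the target namespace, so removing that namespace already destroyed them and /sys/class/net/<dev>/address no longer exists.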
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=()
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save
00:35:16.481 ************************************
00:35:16.481 END TEST nvmf_host_multipath_status
00:35:16.481 ************************************
00:35:16.481
00:35:16.481 real 0m38.788s
00:35:16.481 user 2m3.491s
00:35:16.481 sys 0m12.458s
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:35:16.481 ************************************
00:35:16.481 START TEST nvmf_discovery_remove_ifc
00:35:16.481 ************************************
00:35:16.481 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:35:16.741 * Looking for test storage...
00:35:16.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:16.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:16.741 --rc genhtml_branch_coverage=1
00:35:16.741 --rc genhtml_function_coverage=1
00:35:16.741 --rc genhtml_legend=1
00:35:16.741 --rc geninfo_all_blocks=1
00:35:16.741 --rc geninfo_unexecuted_blocks=1
00:35:16.741
00:35:16.741 '
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:16.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:16.741 --rc genhtml_branch_coverage=1
00:35:16.741 --rc genhtml_function_coverage=1
00:35:16.741 --rc genhtml_legend=1
00:35:16.741 --rc geninfo_all_blocks=1
00:35:16.741 --rc geninfo_unexecuted_blocks=1
00:35:16.741
00:35:16.741 '
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:35:16.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:16.741 --rc genhtml_branch_coverage=1
00:35:16.741 --rc genhtml_function_coverage=1
00:35:16.741 --rc genhtml_legend=1
00:35:16.741 --rc geninfo_all_blocks=1
00:35:16.741 --rc geninfo_unexecuted_blocks=1
00:35:16.741
00:35:16.741 '
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:35:16.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:16.741 --rc genhtml_branch_coverage=1
00:35:16.741 --rc genhtml_function_coverage=1
00:35:16.741 --rc genhtml_legend=1
00:35:16.741 --rc geninfo_all_blocks=1
00:35:16.741 --rc geninfo_unexecuted_blocks=1
00:35:16.741
00:35:16.741 '
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
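The lt 1.15 2 call walked through above is scripts/common.sh comparing the installed lcov version against 2, segment by segment, after splitting both versions on '.', '-' and ':'. A simplified sketch of just the less-than path (the real cmp_versions also handles other operators and routes each segment through the decimal helper):

    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"         # split exactly as in the trace
        IFS=.-: read -ra ver2 <<< "$2"
        (( max = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do      # missing segments compare as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                               # equal versions are not less-than
    }

    lt 1.15 2 && echo "1.15 < 2"               # succeeds: 1 < 2 in segment 0

That first-segment decision is exactly what the trace shows: ver1[0]=1, ver2[0]=2, the (( ver1[v] < ver2[v] )) test fires, and the helper returns success.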
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=virt
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=[condensed: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated several times, followed by the standard system PATH ending in /var/lib/snapd/snap/bin]
00:35:16.741 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=[condensed: the same toolchain directories re-prepended]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=[condensed: the same toolchain directories re-prepended]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [condensed: the resulting PATH value]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
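The '[: : integer expression expected' complaint above is the classic empty-operand failure: test was invoked as '[' '' -eq 1 ']' because whichever flag variable common.sh line 31 checks was unset, and test cannot compare an empty string numerically. It is harmless here (the check simply evaluates false), but the usual guard is a default expansion. A sketch with a hypothetical variable name, since the trace does not show which flag was empty:

    # Unguarded: fails with "[: : integer expression expected" when unset
    #   [ $SOME_FLAG -eq 1 ] && enable_feature
    # Guarded: substitute 0 for unset/empty before the numeric comparison
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        enable_feature                          # hypothetical action
    fi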
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ virt != virt ]]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ no == yes ]]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # [[ virt == phy ]]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # [[ tcp == tcp ]]
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@280 -- # nvmf_veth_init
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@223 -- # create_target_ns
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
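create_target_ns above is the first step of nvmf_veth_init assembling the virtual test network: a dedicated namespace for the target, a bridge in the root namespace with an iptables FORWARD accept, and one veth pair per side whose *_br peers hang off the bridge (the pair creation follows below). A compressed sketch of the topology for orientation; the last three commands are inferred from how such setups normally finish (moving the target end into the namespace and enslaving the *_br peers), since those steps fall outside this excerpt:

    ip netns add nvmf_ns_spdk                      # target-side namespace
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk         # inferred, not in this excerpt
    ip link set initiator0_br master nvmf_br       # inferred, not in this excerpt
    ip link set target0_br master nvmf_br          # inferred, not in this excerpt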
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # create_main_bridge 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@105 -- # delete_main_bridge 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:35:16.742 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:35:16.743 11:18:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator0 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target0 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:35:16.743 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0 up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/setup.sh@152 -- # set_up target0_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target0 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:35:17.082 10.0.0.1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:35:17.082 10.0.0.2 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator0 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:35:17.082 
11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target0_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1 up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up target1_br 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772163 00:35:17.082 
11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:35:17.082 10.0.0.3 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772164 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:35:17.082 10.0.0.4 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator1 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:35:17.082 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target1_br 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
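Condensed from the trace above: for each initiator/target pair, nvmf/setup.sh creates two veth pairs, hangs their bridge-side peers off nvmf_br, moves the target end into the nvmf_ns_spdk namespace, assigns addresses from the 10.0.0.0/24 pool (mirrored into each device's ifalias, which the helpers later read back), and opens TCP port 4420 on the initiator side. A minimal manual sketch of one pair, assuming root plus iproute2 and iptables, with all names taken from the log:

    # pair 0; the loop repeats this as pair 1 with 10.0.0.3/10.0.0.4
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set initiator0 up; ip link set initiator0_br up
    ip link set target0 up;    ip link set target0_br up
    ip link set target0 netns nvmf_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    ip link set initiator0_br master nvmf_br            # both bridge peers join the main bridge
    ip link set target0_br master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT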
00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 2 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:17.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:35:17.083 00:35:17.083 --- 10.0.0.1 ping statistics --- 00:35:17.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.083 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:35:17.083 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:17.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:17.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:35:17.342 00:35:17.342 --- 10.0.0.2 ping statistics --- 00:35:17.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.342 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:35:17.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:35:17.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:35:17.342 00:35:17.342 --- 10.0.0.3 ping statistics --- 00:35:17.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.342 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:35:17.342 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:35:17.343 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:35:17.343 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:35:17.343 00:35:17.343 --- 10.0.0.4 ping statistics --- 00:35:17.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.343 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # return 0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:17.343 
11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:35:17.343 ' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 
00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=91666 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 91666 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91666 ']' 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.343 11:18:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.343 [2024-12-05 11:18:41.974469] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:35:17.343 [2024-12-05 11:18:41.974792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:17.601 [2024-12-05 11:18:42.136411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.601 [2024-12-05 11:18:42.193158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:17.601 [2024-12-05 11:18:42.193219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:17.601 [2024-12-05 11:18:42.193235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:17.601 [2024-12-05 11:18:42.193248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:17.601 [2024-12-05 11:18:42.193259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
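With both pairs wired up, the helper cross-checks reachability in each direction (namespace toward the initiator addresses, host toward the target addresses), derives the legacy environment names the rest of the suite consumes, and launches the target application inside the namespace. Roughly, and under the same assumptions as the sketch above:

    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator0
    ping -c 1 10.0.0.2                               # host      -> target0
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3    # namespace -> initiator1
    ping -c 1 10.0.0.4                               # host      -> target1

    export NVMF_FIRST_INITIATOR_IP=10.0.0.1 NVMF_SECOND_INITIATOR_IP=10.0.0.3
    export NVMF_FIRST_TARGET_IP=10.0.0.2    NVMF_SECOND_TARGET_IP=10.0.0.4

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                       # 91666 in this run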
00:35:17.601 [2024-12-05 11:18:42.193647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.860 [2024-12-05 11:18:42.376188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.860 [2024-12-05 11:18:42.384379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:17.860 null0 00:35:17.860 [2024-12-05 11:18:42.416290] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91703 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:17.860 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91703 /tmp/host.sock 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91703 ']' 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.860 11:18:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:17.860 [2024-12-05 11:18:42.501084] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:35:17.860 [2024-12-05 11:18:42.501184] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91703 ] 00:35:18.118 [2024-12-05 11:18:42.661426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.118 [2024-12-05 11:18:42.727778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.054 11:18:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.990 [2024-12-05 11:18:44.619093] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:19.990 [2024-12-05 11:18:44.619125] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:19.990 [2024-12-05 11:18:44.619138] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:20.249 [2024-12-05 11:18:44.705218] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:20.249 [2024-12-05 11:18:44.759704] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:20.249 [2024-12-05 11:18:44.760622] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5e0010:1 started. 
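On the host side, discovery_remove_ifc.sh runs a second SPDK app against /tmp/host.sock and drives it over RPC; rpc_cmd in the trace is the test-suite wrapper around SPDK's scripts/rpc.py (the relative path below is an assumption, resolved from the repo root). The sequence that produced the discovery attach above, as recorded:

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!                                   # 91703 in this run
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach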
00:35:20.249 [2024-12-05 11:18:44.762259] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:20.249 [2024-12-05 11:18:44.762307] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:20.249 [2024-12-05 11:18:44.762332] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:20.249 [2024-12-05 11:18:44.762347] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:20.249 [2024-12-05 11:18:44.762370] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:20.249 [2024-12-05 11:18:44.767756] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5e0010 was disconnected and freed. delete nvme_qpair. 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_ns_spdk ip link set target0 down 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.249 11:18:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:20.249 11:18:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:21.626 11:18:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:22.560 11:18:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:23.493 11:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:23.493 11:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:23.493 11:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.493 11:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:23.493 11:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:23.493 11:18:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:23.493 11:18:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:23.493 11:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.493 11:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:23.493 11:18:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:24.428 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.687 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:24.687 11:18:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:25.621 11:18:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:25.621 [2024-12-05 11:18:50.190532] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:25.621 [2024-12-05 11:18:50.190594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:25.621 [2024-12-05 11:18:50.190608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:25.621 [2024-12-05 11:18:50.190621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:35:25.621 [2024-12-05 11:18:50.190630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:25.621 [2024-12-05 11:18:50.190640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:25.621 [2024-12-05 11:18:50.190649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:25.621 [2024-12-05 11:18:50.190659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:25.621 [2024-12-05 11:18:50.190668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:25.621 [2024-12-05 11:18:50.190677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:25.621 [2024-12-05 11:18:50.190686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:25.621 [2024-12-05 11:18:50.190695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54a4c0 is same with the state(6) to be set 00:35:25.621 [2024-12-05 11:18:50.200529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54a4c0 (9): Bad file descriptor 00:35:25.621 [2024-12-05 11:18:50.210546] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:25.621 [2024-12-05 11:18:50.210563] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:25.621 [2024-12-05 11:18:50.210569] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:25.621 [2024-12-05 11:18:50.210575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:25.621 [2024-12-05 11:18:50.210612] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
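The repeating @33/@29/@34 blocks are a one-second poll: get_bdev_list flattens the host app's bdev names into a single sorted line, and wait_for_bdev spins until that line matches the expected value (nvme0n1 at first, the empty string after the interface goes down). A condensed reconstruction from the xtrace, assuming the helpers look as traced in host/discovery_remove_ifc.sh (any retry cap in the real script is not visible here):

    get_bdev_list() {
        # one sorted, space-joined line of bdev names from the host app
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1    # '' means: wait until no bdevs remain
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

The ERROR lines interleaved above (errno 110 Connection timed out, Bad file descriptor, ABORTED - SQ DELETION) are the expected fallout of pulling 10.0.0.2 out from under the live qpair, not a test failure.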
00:35:26.554 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:26.554 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.554 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:26.554 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.554 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.554 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:26.554 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:26.811 [2024-12-05 11:18:51.227676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:26.811 [2024-12-05 11:18:51.227815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54a4c0 with addr=10.0.0.2, port=4420 00:35:26.811 [2024-12-05 11:18:51.227862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54a4c0 is same with the state(6) to be set 00:35:26.811 [2024-12-05 11:18:51.227942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54a4c0 (9): Bad file descriptor 00:35:26.811 [2024-12-05 11:18:51.229395] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:26.811 [2024-12-05 11:18:51.230342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:26.811 [2024-12-05 11:18:51.230929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:26.811 [2024-12-05 11:18:51.231127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:26.811 [2024-12-05 11:18:51.231537] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:26.811 [2024-12-05 11:18:51.231585] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:26.811 [2024-12-05 11:18:51.231647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:26.812 [2024-12-05 11:18:51.231935] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:26.812 [2024-12-05 11:18:51.231965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:26.812 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.812 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:26.812 11:18:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:27.745 [2024-12-05 11:18:52.232282] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:27.745 [2024-12-05 11:18:52.232335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:27.745 [2024-12-05 11:18:52.232364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:27.745 [2024-12-05 11:18:52.232375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:27.745 [2024-12-05 11:18:52.232386] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:27.745 [2024-12-05 11:18:52.232396] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:27.745 [2024-12-05 11:18:52.232403] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:27.745 [2024-12-05 11:18:52.232409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:27.745 [2024-12-05 11:18:52.232445] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:27.745 [2024-12-05 11:18:52.232495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.745 [2024-12-05 11:18:52.232510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.745 [2024-12-05 11:18:52.232524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.745 [2024-12-05 11:18:52.232534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.745 [2024-12-05 11:18:52.232544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.745 [2024-12-05 11:18:52.232553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.745 [2024-12-05 11:18:52.232563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.745 [2024-12-05 11:18:52.232573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.745 [2024-12-05 11:18:52.232584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.745 [2024-12-05 11:18:52.232603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.745 [2024-12-05 11:18:52.232613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:35:27.745 [2024-12-05 11:18:52.232661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54a140 (9): Bad file descriptor 00:35:27.745 [2024-12-05 11:18:52.233642] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:27.745 [2024-12-05 11:18:52.233662] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:27.745 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.745 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.745 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.745 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.745 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.745 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.745 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:27.746 11:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:29.138 11:18:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:29.707 [2024-12-05 11:18:54.240549] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:29.707 [2024-12-05 11:18:54.240585] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:29.707 [2024-12-05 11:18:54.240606] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:29.707 [2024-12-05 11:18:54.328648] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:29.966 [2024-12-05 11:18:54.390036] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:29.966 [2024-12-05 11:18:54.390600] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x5b7370:1 started. 00:35:29.966 [2024-12-05 11:18:54.391633] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:29.966 [2024-12-05 11:18:54.391674] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:29.966 [2024-12-05 11:18:54.391692] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:29.966 [2024-12-05 11:18:54.391708] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:29.966 [2024-12-05 11:18:54.391717] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:29.966 [2024-12-05 11:18:54.399267] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x5b7370 was disconnected and freed. delete nvme_qpair. 
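Condensing the whole interface flip: the target address is removed inside the nvmf_ns_spdk namespace and the link downed, reconnects fail until the 2 s --ctrlr-loss-timeout-sec expires and the controller (and nvme0n1) is deleted, then the address is restored and discovery re-attaches the same subsystem as a fresh controller, nvme1n1. The four ip commands exactly as traced at @75/@76 and @82/@83:

    ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0
    ip netns exec nvmf_ns_spdk ip link set target0 down
    # ... wait_for_bdev '' -- bdev list drains once reconnects give up ...
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip netns exec nvmf_ns_spdk ip link set target0 up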
00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91703 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91703 ']' 00:35:29.966 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91703 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91703 00:35:29.967 killing process with pid 91703 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91703' 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91703 00:35:29.967 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91703 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:30.226 rmmod nvme_tcp 00:35:30.226 rmmod nvme_fabrics 00:35:30.226 rmmod nvme_keyring 00:35:30.226 11:18:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 91666 ']' 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 91666 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91666 ']' 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91666 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91666 00:35:30.226 killing process with pid 91666 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91666' 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91666 00:35:30.226 11:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91666 00:35:30.491 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:35:30.492 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 
-- # for dev in "${dev_map[@]}" 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:35:30.759 00:35:30.759 real 0m14.105s 00:35:30.759 user 0m24.454s 00:35:30.759 sys 0m2.601s 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:35:30.759 ************************************ 00:35:30.759 END TEST nvmf_discovery_remove_ifc 00:35:30.759 ************************************ 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.759 ************************************ 00:35:30.759 START TEST nvmf_identify_kernel_target 00:35:30.759 ************************************ 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:30.759 * Looking for test storage... 00:35:30.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:30.759 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:31.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.020 --rc genhtml_branch_coverage=1 00:35:31.020 --rc genhtml_function_coverage=1 00:35:31.020 --rc genhtml_legend=1 00:35:31.020 --rc geninfo_all_blocks=1 00:35:31.020 --rc geninfo_unexecuted_blocks=1 00:35:31.020 00:35:31.020 ' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:31.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.020 --rc genhtml_branch_coverage=1 00:35:31.020 --rc genhtml_function_coverage=1 00:35:31.020 --rc genhtml_legend=1 00:35:31.020 --rc geninfo_all_blocks=1 00:35:31.020 --rc geninfo_unexecuted_blocks=1 00:35:31.020 00:35:31.020 ' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:31.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.020 --rc genhtml_branch_coverage=1 00:35:31.020 --rc genhtml_function_coverage=1 00:35:31.020 --rc genhtml_legend=1 00:35:31.020 --rc geninfo_all_blocks=1 00:35:31.020 --rc geninfo_unexecuted_blocks=1 00:35:31.020 00:35:31.020 ' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:31.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.020 --rc genhtml_branch_coverage=1 00:35:31.020 --rc genhtml_function_coverage=1 00:35:31.020 --rc genhtml_legend=1 00:35:31.020 --rc geninfo_all_blocks=1 00:35:31.020 --rc geninfo_unexecuted_blocks=1 00:35:31.020 00:35:31.020 ' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
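The scripts/common.sh walk above is just a component-wise version comparison: "lt 1.15 2" decides whether the installed lcov predates 2.x before the LCOV_OPTS/LCOV exports are chosen. A condensed sketch of the comparator the xtrace steps through at @333-@368 (the real cmp_versions also validates each component with decimal(), omitted here):

    cmp_versions() {    # usage: cmp_versions VER1 OP VER2, OP one of < > <= >= ==
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]    # all components equal: true only for <= >= ==
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> exit 0 (true)

Missing components compare as 0, so 1.15 vs 2 walks (1,15) against (2,0) and returns at the first position, matching the ver1_l=2 / ver2_l=1 bookkeeping in the trace.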
00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.020 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:31.021 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:31.021 11:18:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@223 -- # create_target_ns 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:31.021 11:18:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:35:31.021 11:18:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target0 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:35:31.021 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:31.022 11:18:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:31.022 10.0.0.1 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:31.022 11:18:55 
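
[editor's note] Addresses are drawn from an integer pool (ip_pool=0x0a000001, i.e. 167772161) and rendered as dotted quads by val_to_ip before each `ip addr add`. The printf call matches the trace; splitting the 32-bit value into octets is the natural way to produce its arguments, though the real helper may differ in detail:

  # Sketch of val_to_ip: 32-bit integer -> dotted-quad string.
  val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
      $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
  }

  val_to_ip 167772161   # -> 10.0.0.1 (0x0A000001)
  val_to_ip 167772162   # -> 10.0.0.2

Note in the trace that each assigned address is also mirrored into /sys/class/net/<dev>/ifalias via tee, which is what lets later helpers look the address back up.
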
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:35:31.022 10.0.0.2 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.022 11:18:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:35:31.022 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local 
dev=initiator1_br in_ns= 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target1 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772163 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:35:31.282 11:18:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:35:31.282 10.0.0.3 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772164 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:35:31.282 10.0.0.4 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target1 up' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:35:31.282 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:35:31.283 11:18:55 
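
[editor's note] With both pairs wired up, ping_ips 2 (next in the trace) verifies connectivity in both directions: each initiator address is pinged from inside the target namespace, and each target address from the host side, with the addresses read back from ifalias. An illustrative equivalent of that loop:

  # Both directions across the bridge, one ping per address.
  for pair in 0 1; do
    init_ip=$(cat "/sys/class/net/initiator${pair}/ifalias")
    tgt_ip=$(ip netns exec nvmf_ns_spdk cat "/sys/class/net/target${pair}/ifalias")
    ip netns exec nvmf_ns_spdk ping -c 1 "$init_ip"   # netns -> host direction
    ping -c 1 "$tgt_ip"                               # host -> netns direction
  done
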
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:31.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:31.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:35:31.283 00:35:31.283 --- 10.0.0.1 ping statistics --- 00:35:31.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.283 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:31.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:31.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:35:31.283 00:35:31.283 --- 10.0.0.2 ping statistics --- 00:35:31.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.283 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:35:31.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:35:31.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:35:31.283 00:35:31.283 --- 10.0.0.3 ping statistics --- 00:35:31.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.283 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:35:31.283 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:35:31.284 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:35:31.284 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:35:31.284 00:35:31.284 --- 10.0.0.4 ping statistics --- 00:35:31.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.284 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # return 0 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:31.284 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:35:31.542 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:35:31.543 ' 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:31.543 11:18:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:31.543 11:18:56 
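
[editor's note] The nvmf_legacy_env pass above resolves the legacy variables purely from sysfs: because every address was mirrored into the device's ifalias file earlier, each lookup is a plain read, optionally inside the namespace. An illustrative reimplementation of the lookup, with the resulting values as they appear in this run:

  # Hypothetical helper mirroring get_ip_address from the trace.
  get_ip_address() {
    local dev=$1 ns=${2:-}
    if [[ -n $ns ]]; then
      ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias"
    else
      cat "/sys/class/net/$dev/ifalias"
    fi
  }

  NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)            # 10.0.0.1
  NVMF_SECOND_INITIATOR_IP=$(get_ip_address initiator1)           # 10.0.0.3
  NVMF_FIRST_TARGET_IP=$(get_ip_address target0 nvmf_ns_spdk)     # 10.0.0.2
  NVMF_SECOND_TARGET_IP=$(get_ip_address target1 nvmf_ns_spdk)    # 10.0.0.4
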
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:31.543 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:31.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:32.059 Waiting for block devices as requested 00:35:32.059 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:32.059 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:32.318 No valid GPT data, bailing 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 
00:35:32.318 No valid GPT data, bailing 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:35:32.318 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:35:32.318 No valid GPT data, bailing 00:35:32.319 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:35:32.319 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:32.319 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:32.319 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:35:32.319 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:32.319 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:32.578 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:35:32.578 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:35:32.578 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:32.578 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:32.578 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:35:32.578 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:35:32.578 11:18:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:32.578 No valid GPT data, bailing 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:32.578 11:18:57 
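
[editor's note] The "No valid GPT data, bailing" loop above walks every /sys/block/nvme* device looking for one that is safe to hand to the kernel target as a namespace: zoned devices are skipped, and a device counts as unused when neither SPDK's spdk-gpt.py nor blkid finds a partition table on it. A simplified sketch of that probe, using the plain blkid check to stand in for both (like the real loop, it keeps the last candidate, which is /dev/nvme1n1 in this run):

  nvme=""
  for block in /sys/block/nvme*; do
    dev=/dev/${block##*/}
    # Skip zoned namespaces outright.
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # No PTTYPE reported -> no partition table -> treat as unused.
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      nvme=$dev
    fi
  done
  echo "picked: ${nvme:-none}"
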
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:32.578 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -a 10.0.0.1 -t tcp -s 4420 00:35:32.578 00:35:32.578 Discovery Log Number of Records 2, Generation counter 2 00:35:32.578 =====Discovery Log Entry 0====== 00:35:32.578 trtype: tcp 00:35:32.578 adrfam: ipv4 00:35:32.578 subtype: current discovery subsystem 00:35:32.578 treq: not specified, sq flow control disable supported 00:35:32.578 portid: 1 00:35:32.578 trsvcid: 4420 00:35:32.578 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:32.578 traddr: 10.0.0.1 00:35:32.578 eflags: none 00:35:32.578 sectype: none 00:35:32.578 =====Discovery Log Entry 1====== 00:35:32.578 trtype: tcp 00:35:32.578 adrfam: ipv4 00:35:32.578 subtype: nvme subsystem 00:35:32.578 treq: not specified, sq flow control disable supported 00:35:32.578 portid: 1 00:35:32.578 trsvcid: 4420 00:35:32.578 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:32.578 traddr: 10.0.0.1 00:35:32.578 eflags: none 00:35:32.578 sectype: none 00:35:32.579 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:32.579 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:32.838 ===================================================== 00:35:32.838 NVMe over Fabrics controller at 10.0.0.1:4420: 
nqn.2014-08.org.nvmexpress.discovery 00:35:32.838 ===================================================== 00:35:32.838 Controller Capabilities/Features 00:35:32.838 ================================ 00:35:32.838 Vendor ID: 0000 00:35:32.838 Subsystem Vendor ID: 0000 00:35:32.838 Serial Number: efdd85580037f5bb9855 00:35:32.838 Model Number: Linux 00:35:32.838 Firmware Version: 6.8.9-20 00:35:32.838 Recommended Arb Burst: 0 00:35:32.838 IEEE OUI Identifier: 00 00 00 00:35:32.838 Multi-path I/O 00:35:32.838 May have multiple subsystem ports: No 00:35:32.838 May have multiple controllers: No 00:35:32.838 Associated with SR-IOV VF: No 00:35:32.838 Max Data Transfer Size: Unlimited 00:35:32.838 Max Number of Namespaces: 0 00:35:32.838 Max Number of I/O Queues: 1024 00:35:32.838 NVMe Specification Version (VS): 1.3 00:35:32.838 NVMe Specification Version (Identify): 1.3 00:35:32.838 Maximum Queue Entries: 1024 00:35:32.838 Contiguous Queues Required: No 00:35:32.838 Arbitration Mechanisms Supported 00:35:32.838 Weighted Round Robin: Not Supported 00:35:32.838 Vendor Specific: Not Supported 00:35:32.838 Reset Timeout: 7500 ms 00:35:32.838 Doorbell Stride: 4 bytes 00:35:32.838 NVM Subsystem Reset: Not Supported 00:35:32.838 Command Sets Supported 00:35:32.838 NVM Command Set: Supported 00:35:32.838 Boot Partition: Not Supported 00:35:32.838 Memory Page Size Minimum: 4096 bytes 00:35:32.838 Memory Page Size Maximum: 4096 bytes 00:35:32.838 Persistent Memory Region: Not Supported 00:35:32.838 Optional Asynchronous Events Supported 00:35:32.838 Namespace Attribute Notices: Not Supported 00:35:32.838 Firmware Activation Notices: Not Supported 00:35:32.838 ANA Change Notices: Not Supported 00:35:32.838 PLE Aggregate Log Change Notices: Not Supported 00:35:32.838 LBA Status Info Alert Notices: Not Supported 00:35:32.838 EGE Aggregate Log Change Notices: Not Supported 00:35:32.838 Normal NVM Subsystem Shutdown event: Not Supported 00:35:32.838 Zone Descriptor Change Notices: Not Supported 00:35:32.838 Discovery Log Change Notices: Supported 00:35:32.838 Controller Attributes 00:35:32.838 128-bit Host Identifier: Not Supported 00:35:32.838 Non-Operational Permissive Mode: Not Supported 00:35:32.838 NVM Sets: Not Supported 00:35:32.838 Read Recovery Levels: Not Supported 00:35:32.838 Endurance Groups: Not Supported 00:35:32.838 Predictable Latency Mode: Not Supported 00:35:32.838 Traffic Based Keep ALive: Not Supported 00:35:32.838 Namespace Granularity: Not Supported 00:35:32.838 SQ Associations: Not Supported 00:35:32.838 UUID List: Not Supported 00:35:32.838 Multi-Domain Subsystem: Not Supported 00:35:32.838 Fixed Capacity Management: Not Supported 00:35:32.838 Variable Capacity Management: Not Supported 00:35:32.838 Delete Endurance Group: Not Supported 00:35:32.838 Delete NVM Set: Not Supported 00:35:32.838 Extended LBA Formats Supported: Not Supported 00:35:32.838 Flexible Data Placement Supported: Not Supported 00:35:32.838 00:35:32.838 Controller Memory Buffer Support 00:35:32.838 ================================ 00:35:32.838 Supported: No 00:35:32.838 00:35:32.838 Persistent Memory Region Support 00:35:32.838 ================================ 00:35:32.838 Supported: No 00:35:32.838 00:35:32.838 Admin Command Set Attributes 00:35:32.838 ============================ 00:35:32.838 Security Send/Receive: Not Supported 00:35:32.838 Format NVM: Not Supported 00:35:32.838 Firmware Activate/Download: Not Supported 00:35:32.838 Namespace Management: Not Supported 00:35:32.838 Device Self-Test: Not Supported 
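[annotation] The subsystem being identified here was created by the configfs writes traced at nvmf/common.sh@460-479 above. The xtrace output only shows the echo side of each redirection, so the attribute paths below are filled in from the standard kernel nvmet configfs layout; treat this as a reconstruction of those steps, not a verbatim copy of the script:

    # Export $nvme (here /dev/nvme1n1) as a kernel NVMe-oF/TCP target on 10.0.0.1:4420.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # surfaces as Model Number above
    echo 1        > "$subsys/attr_allow_any_host"
    echo "$nvme"  > "$subsys/namespaces/1/device_path"
    echo 1        > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # the port starts listening once linked

The clean_kernel_target teardown later in this log undoes it in reverse: echo 0 to the namespace enable file, rm -f the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.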
00:35:32.838 Directives: Not Supported 00:35:32.838 NVMe-MI: Not Supported 00:35:32.838 Virtualization Management: Not Supported 00:35:32.838 Doorbell Buffer Config: Not Supported 00:35:32.838 Get LBA Status Capability: Not Supported 00:35:32.838 Command & Feature Lockdown Capability: Not Supported 00:35:32.838 Abort Command Limit: 1 00:35:32.838 Async Event Request Limit: 1 00:35:32.838 Number of Firmware Slots: N/A 00:35:32.838 Firmware Slot 1 Read-Only: N/A 00:35:32.838 Firmware Activation Without Reset: N/A 00:35:32.838 Multiple Update Detection Support: N/A 00:35:32.838 Firmware Update Granularity: No Information Provided 00:35:32.838 Per-Namespace SMART Log: No 00:35:32.838 Asymmetric Namespace Access Log Page: Not Supported 00:35:32.838 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:32.838 Command Effects Log Page: Not Supported 00:35:32.838 Get Log Page Extended Data: Supported 00:35:32.838 Telemetry Log Pages: Not Supported 00:35:32.838 Persistent Event Log Pages: Not Supported 00:35:32.838 Supported Log Pages Log Page: May Support 00:35:32.838 Commands Supported & Effects Log Page: Not Supported 00:35:32.838 Feature Identifiers & Effects Log Page:May Support 00:35:32.838 NVMe-MI Commands & Effects Log Page: May Support 00:35:32.838 Data Area 4 for Telemetry Log: Not Supported 00:35:32.838 Error Log Page Entries Supported: 1 00:35:32.838 Keep Alive: Not Supported 00:35:32.838 00:35:32.838 NVM Command Set Attributes 00:35:32.838 ========================== 00:35:32.838 Submission Queue Entry Size 00:35:32.838 Max: 1 00:35:32.838 Min: 1 00:35:32.838 Completion Queue Entry Size 00:35:32.838 Max: 1 00:35:32.838 Min: 1 00:35:32.838 Number of Namespaces: 0 00:35:32.838 Compare Command: Not Supported 00:35:32.838 Write Uncorrectable Command: Not Supported 00:35:32.839 Dataset Management Command: Not Supported 00:35:32.839 Write Zeroes Command: Not Supported 00:35:32.839 Set Features Save Field: Not Supported 00:35:32.839 Reservations: Not Supported 00:35:32.839 Timestamp: Not Supported 00:35:32.839 Copy: Not Supported 00:35:32.839 Volatile Write Cache: Not Present 00:35:32.839 Atomic Write Unit (Normal): 1 00:35:32.839 Atomic Write Unit (PFail): 1 00:35:32.839 Atomic Compare & Write Unit: 1 00:35:32.839 Fused Compare & Write: Not Supported 00:35:32.839 Scatter-Gather List 00:35:32.839 SGL Command Set: Supported 00:35:32.839 SGL Keyed: Not Supported 00:35:32.839 SGL Bit Bucket Descriptor: Not Supported 00:35:32.839 SGL Metadata Pointer: Not Supported 00:35:32.839 Oversized SGL: Not Supported 00:35:32.839 SGL Metadata Address: Not Supported 00:35:32.839 SGL Offset: Supported 00:35:32.839 Transport SGL Data Block: Not Supported 00:35:32.839 Replay Protected Memory Block: Not Supported 00:35:32.839 00:35:32.839 Firmware Slot Information 00:35:32.839 ========================= 00:35:32.839 Active slot: 0 00:35:32.839 00:35:32.839 00:35:32.839 Error Log 00:35:32.839 ========= 00:35:32.839 00:35:32.839 Active Namespaces 00:35:32.839 ================= 00:35:32.839 Discovery Log Page 00:35:32.839 ================== 00:35:32.839 Generation Counter: 2 00:35:32.839 Number of Records: 2 00:35:32.839 Record Format: 0 00:35:32.839 00:35:32.839 Discovery Log Entry 0 00:35:32.839 ---------------------- 00:35:32.839 Transport Type: 3 (TCP) 00:35:32.839 Address Family: 1 (IPv4) 00:35:32.839 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:32.839 Entry Flags: 00:35:32.839 Duplicate Returned Information: 0 00:35:32.839 Explicit Persistent Connection Support for Discovery: 0 00:35:32.839 
Transport Requirements: 00:35:32.839 Secure Channel: Not Specified 00:35:32.839 Port ID: 1 (0x0001) 00:35:32.839 Controller ID: 65535 (0xffff) 00:35:32.839 Admin Max SQ Size: 32 00:35:32.839 Transport Service Identifier: 4420 00:35:32.839 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:32.839 Transport Address: 10.0.0.1 00:35:32.839 Discovery Log Entry 1 00:35:32.839 ---------------------- 00:35:32.839 Transport Type: 3 (TCP) 00:35:32.839 Address Family: 1 (IPv4) 00:35:32.839 Subsystem Type: 2 (NVM Subsystem) 00:35:32.839 Entry Flags: 00:35:32.839 Duplicate Returned Information: 0 00:35:32.839 Explicit Persistent Connection Support for Discovery: 0 00:35:32.839 Transport Requirements: 00:35:32.839 Secure Channel: Not Specified 00:35:32.839 Port ID: 1 (0x0001) 00:35:32.839 Controller ID: 65535 (0xffff) 00:35:32.839 Admin Max SQ Size: 32 00:35:32.839 Transport Service Identifier: 4420 00:35:32.839 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:32.839 Transport Address: 10.0.0.1 00:35:32.839 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:33.098 get_feature(0x01) failed 00:35:33.098 get_feature(0x02) failed 00:35:33.098 get_feature(0x04) failed 00:35:33.098 ===================================================== 00:35:33.098 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:33.098 ===================================================== 00:35:33.098 Controller Capabilities/Features 00:35:33.098 ================================ 00:35:33.098 Vendor ID: 0000 00:35:33.098 Subsystem Vendor ID: 0000 00:35:33.098 Serial Number: 62bb4ffd099ad0e5eba6 00:35:33.098 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:33.098 Firmware Version: 6.8.9-20 00:35:33.098 Recommended Arb Burst: 6 00:35:33.098 IEEE OUI Identifier: 00 00 00 00:35:33.098 Multi-path I/O 00:35:33.098 May have multiple subsystem ports: Yes 00:35:33.098 May have multiple controllers: Yes 00:35:33.098 Associated with SR-IOV VF: No 00:35:33.098 Max Data Transfer Size: Unlimited 00:35:33.098 Max Number of Namespaces: 1024 00:35:33.098 Max Number of I/O Queues: 128 00:35:33.098 NVMe Specification Version (VS): 1.3 00:35:33.098 NVMe Specification Version (Identify): 1.3 00:35:33.098 Maximum Queue Entries: 1024 00:35:33.098 Contiguous Queues Required: No 00:35:33.098 Arbitration Mechanisms Supported 00:35:33.098 Weighted Round Robin: Not Supported 00:35:33.098 Vendor Specific: Not Supported 00:35:33.098 Reset Timeout: 7500 ms 00:35:33.098 Doorbell Stride: 4 bytes 00:35:33.098 NVM Subsystem Reset: Not Supported 00:35:33.098 Command Sets Supported 00:35:33.098 NVM Command Set: Supported 00:35:33.098 Boot Partition: Not Supported 00:35:33.098 Memory Page Size Minimum: 4096 bytes 00:35:33.098 Memory Page Size Maximum: 4096 bytes 00:35:33.098 Persistent Memory Region: Not Supported 00:35:33.098 Optional Asynchronous Events Supported 00:35:33.098 Namespace Attribute Notices: Supported 00:35:33.098 Firmware Activation Notices: Not Supported 00:35:33.098 ANA Change Notices: Supported 00:35:33.098 PLE Aggregate Log Change Notices: Not Supported 00:35:33.098 LBA Status Info Alert Notices: Not Supported 00:35:33.098 EGE Aggregate Log Change Notices: Not Supported 00:35:33.098 Normal NVM Subsystem Shutdown event: Not Supported 00:35:33.098 Zone Descriptor Change Notices: Not Supported 
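[annotation] The discovery log above came from the kernel initiator's nvme discover call; the identify dumps come from SPDK's spdk_nvme_identify example app. The same target can be exercised end to end with stock nvme-cli; a hedged equivalent, with address and NQN taken from the trace:

    nvme discover -t tcp -a 10.0.0.1 -s 4420
    nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme list     # the exported 5 GiB namespace appears as a new /dev/nvme* device
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn

The get_feature(0x01)/(0x02)/(0x04) failures printed below are expected: the kernel nvmet target rejects those optional features, and the rejections show up again as the three Invalid Field entries (Status Code 0x2) in the controller's error log further down.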
00:35:33.098 Discovery Log Change Notices: Not Supported 00:35:33.098 Controller Attributes 00:35:33.098 128-bit Host Identifier: Supported 00:35:33.098 Non-Operational Permissive Mode: Not Supported 00:35:33.098 NVM Sets: Not Supported 00:35:33.098 Read Recovery Levels: Not Supported 00:35:33.098 Endurance Groups: Not Supported 00:35:33.098 Predictable Latency Mode: Not Supported 00:35:33.098 Traffic Based Keep ALive: Supported 00:35:33.098 Namespace Granularity: Not Supported 00:35:33.098 SQ Associations: Not Supported 00:35:33.098 UUID List: Not Supported 00:35:33.098 Multi-Domain Subsystem: Not Supported 00:35:33.098 Fixed Capacity Management: Not Supported 00:35:33.098 Variable Capacity Management: Not Supported 00:35:33.098 Delete Endurance Group: Not Supported 00:35:33.098 Delete NVM Set: Not Supported 00:35:33.098 Extended LBA Formats Supported: Not Supported 00:35:33.098 Flexible Data Placement Supported: Not Supported 00:35:33.098 00:35:33.098 Controller Memory Buffer Support 00:35:33.098 ================================ 00:35:33.098 Supported: No 00:35:33.098 00:35:33.098 Persistent Memory Region Support 00:35:33.098 ================================ 00:35:33.098 Supported: No 00:35:33.098 00:35:33.098 Admin Command Set Attributes 00:35:33.098 ============================ 00:35:33.098 Security Send/Receive: Not Supported 00:35:33.098 Format NVM: Not Supported 00:35:33.098 Firmware Activate/Download: Not Supported 00:35:33.098 Namespace Management: Not Supported 00:35:33.098 Device Self-Test: Not Supported 00:35:33.098 Directives: Not Supported 00:35:33.098 NVMe-MI: Not Supported 00:35:33.099 Virtualization Management: Not Supported 00:35:33.099 Doorbell Buffer Config: Not Supported 00:35:33.099 Get LBA Status Capability: Not Supported 00:35:33.099 Command & Feature Lockdown Capability: Not Supported 00:35:33.099 Abort Command Limit: 4 00:35:33.099 Async Event Request Limit: 4 00:35:33.099 Number of Firmware Slots: N/A 00:35:33.099 Firmware Slot 1 Read-Only: N/A 00:35:33.099 Firmware Activation Without Reset: N/A 00:35:33.099 Multiple Update Detection Support: N/A 00:35:33.099 Firmware Update Granularity: No Information Provided 00:35:33.099 Per-Namespace SMART Log: Yes 00:35:33.099 Asymmetric Namespace Access Log Page: Supported 00:35:33.099 ANA Transition Time : 10 sec 00:35:33.099 00:35:33.099 Asymmetric Namespace Access Capabilities 00:35:33.099 ANA Optimized State : Supported 00:35:33.099 ANA Non-Optimized State : Supported 00:35:33.099 ANA Inaccessible State : Supported 00:35:33.099 ANA Persistent Loss State : Supported 00:35:33.099 ANA Change State : Supported 00:35:33.099 ANAGRPID is not changed : No 00:35:33.099 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:33.099 00:35:33.099 ANA Group Identifier Maximum : 128 00:35:33.099 Number of ANA Group Identifiers : 128 00:35:33.099 Max Number of Allowed Namespaces : 1024 00:35:33.099 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:33.099 Command Effects Log Page: Supported 00:35:33.099 Get Log Page Extended Data: Supported 00:35:33.099 Telemetry Log Pages: Not Supported 00:35:33.099 Persistent Event Log Pages: Not Supported 00:35:33.099 Supported Log Pages Log Page: May Support 00:35:33.099 Commands Supported & Effects Log Page: Not Supported 00:35:33.099 Feature Identifiers & Effects Log Page:May Support 00:35:33.099 NVMe-MI Commands & Effects Log Page: May Support 00:35:33.099 Data Area 4 for Telemetry Log: Not Supported 00:35:33.099 Error Log Page Entries Supported: 128 00:35:33.099 Keep Alive: Supported 
00:35:33.099 Keep Alive Granularity: 1000 ms 00:35:33.099 00:35:33.099 NVM Command Set Attributes 00:35:33.099 ========================== 00:35:33.099 Submission Queue Entry Size 00:35:33.099 Max: 64 00:35:33.099 Min: 64 00:35:33.099 Completion Queue Entry Size 00:35:33.099 Max: 16 00:35:33.099 Min: 16 00:35:33.099 Number of Namespaces: 1024 00:35:33.099 Compare Command: Not Supported 00:35:33.099 Write Uncorrectable Command: Not Supported 00:35:33.099 Dataset Management Command: Supported 00:35:33.099 Write Zeroes Command: Supported 00:35:33.099 Set Features Save Field: Not Supported 00:35:33.099 Reservations: Not Supported 00:35:33.099 Timestamp: Not Supported 00:35:33.099 Copy: Not Supported 00:35:33.099 Volatile Write Cache: Present 00:35:33.099 Atomic Write Unit (Normal): 1 00:35:33.099 Atomic Write Unit (PFail): 1 00:35:33.099 Atomic Compare & Write Unit: 1 00:35:33.099 Fused Compare & Write: Not Supported 00:35:33.099 Scatter-Gather List 00:35:33.099 SGL Command Set: Supported 00:35:33.099 SGL Keyed: Not Supported 00:35:33.099 SGL Bit Bucket Descriptor: Not Supported 00:35:33.099 SGL Metadata Pointer: Not Supported 00:35:33.099 Oversized SGL: Not Supported 00:35:33.099 SGL Metadata Address: Not Supported 00:35:33.099 SGL Offset: Supported 00:35:33.099 Transport SGL Data Block: Not Supported 00:35:33.099 Replay Protected Memory Block: Not Supported 00:35:33.099 00:35:33.099 Firmware Slot Information 00:35:33.099 ========================= 00:35:33.099 Active slot: 0 00:35:33.099 00:35:33.099 Asymmetric Namespace Access 00:35:33.099 =========================== 00:35:33.099 Change Count : 0 00:35:33.099 Number of ANA Group Descriptors : 1 00:35:33.099 ANA Group Descriptor : 0 00:35:33.099 ANA Group ID : 1 00:35:33.099 Number of NSID Values : 1 00:35:33.099 Change Count : 0 00:35:33.099 ANA State : 1 00:35:33.099 Namespace Identifier : 1 00:35:33.099 00:35:33.099 Commands Supported and Effects 00:35:33.099 ============================== 00:35:33.099 Admin Commands 00:35:33.099 -------------- 00:35:33.099 Get Log Page (02h): Supported 00:35:33.099 Identify (06h): Supported 00:35:33.099 Abort (08h): Supported 00:35:33.099 Set Features (09h): Supported 00:35:33.099 Get Features (0Ah): Supported 00:35:33.099 Asynchronous Event Request (0Ch): Supported 00:35:33.099 Keep Alive (18h): Supported 00:35:33.099 I/O Commands 00:35:33.099 ------------ 00:35:33.099 Flush (00h): Supported 00:35:33.099 Write (01h): Supported LBA-Change 00:35:33.099 Read (02h): Supported 00:35:33.099 Write Zeroes (08h): Supported LBA-Change 00:35:33.099 Dataset Management (09h): Supported 00:35:33.099 00:35:33.099 Error Log 00:35:33.099 ========= 00:35:33.099 Entry: 0 00:35:33.099 Error Count: 0x3 00:35:33.099 Submission Queue Id: 0x0 00:35:33.099 Command Id: 0x5 00:35:33.099 Phase Bit: 0 00:35:33.099 Status Code: 0x2 00:35:33.099 Status Code Type: 0x0 00:35:33.099 Do Not Retry: 1 00:35:33.099 Error Location: 0x28 00:35:33.099 LBA: 0x0 00:35:33.099 Namespace: 0x0 00:35:33.099 Vendor Log Page: 0x0 00:35:33.099 ----------- 00:35:33.099 Entry: 1 00:35:33.099 Error Count: 0x2 00:35:33.099 Submission Queue Id: 0x0 00:35:33.099 Command Id: 0x5 00:35:33.099 Phase Bit: 0 00:35:33.099 Status Code: 0x2 00:35:33.099 Status Code Type: 0x0 00:35:33.099 Do Not Retry: 1 00:35:33.099 Error Location: 0x28 00:35:33.099 LBA: 0x0 00:35:33.099 Namespace: 0x0 00:35:33.099 Vendor Log Page: 0x0 00:35:33.099 ----------- 00:35:33.099 Entry: 2 00:35:33.099 Error Count: 0x1 00:35:33.099 Submission Queue Id: 0x0 00:35:33.099 Command Id: 0x4 
00:35:33.099 Phase Bit: 0 00:35:33.099 Status Code: 0x2 00:35:33.099 Status Code Type: 0x0 00:35:33.099 Do Not Retry: 1 00:35:33.099 Error Location: 0x28 00:35:33.099 LBA: 0x0 00:35:33.100 Namespace: 0x0 00:35:33.100 Vendor Log Page: 0x0 00:35:33.100 00:35:33.100 Number of Queues 00:35:33.100 ================ 00:35:33.100 Number of I/O Submission Queues: 128 00:35:33.100 Number of I/O Completion Queues: 128 00:35:33.100 00:35:33.100 ZNS Specific Controller Data 00:35:33.100 ============================ 00:35:33.100 Zone Append Size Limit: 0 00:35:33.100 00:35:33.100 00:35:33.100 Active Namespaces 00:35:33.100 ================= 00:35:33.100 get_feature(0x05) failed 00:35:33.100 Namespace ID:1 00:35:33.100 Command Set Identifier: NVM (00h) 00:35:33.100 Deallocate: Supported 00:35:33.100 Deallocated/Unwritten Error: Not Supported 00:35:33.100 Deallocated Read Value: Unknown 00:35:33.100 Deallocate in Write Zeroes: Not Supported 00:35:33.100 Deallocated Guard Field: 0xFFFF 00:35:33.100 Flush: Supported 00:35:33.100 Reservation: Not Supported 00:35:33.100 Namespace Sharing Capabilities: Multiple Controllers 00:35:33.100 Size (in LBAs): 1310720 (5GiB) 00:35:33.100 Capacity (in LBAs): 1310720 (5GiB) 00:35:33.100 Utilization (in LBAs): 1310720 (5GiB) 00:35:33.100 UUID: 543f8ad8-3777-4fe5-95fe-c8c2906bdbe2 00:35:33.100 Thin Provisioning: Not Supported 00:35:33.100 Per-NS Atomic Units: Yes 00:35:33.100 Atomic Boundary Size (Normal): 0 00:35:33.100 Atomic Boundary Size (PFail): 0 00:35:33.100 Atomic Boundary Offset: 0 00:35:33.100 NGUID/EUI64 Never Reused: No 00:35:33.100 ANA group ID: 1 00:35:33.100 Namespace Write Protected: No 00:35:33.100 Number of LBA Formats: 1 00:35:33.100 Current LBA Format: LBA Format #00 00:35:33.100 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:35:33.100 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:33.100 rmmod nvme_tcp 00:35:33.100 rmmod nvme_fabrics 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:33.100 11:18:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:35:33.100 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:35:33.358 11:18:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:33.358 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:35:33.359 11:18:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:33.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:34.184 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:34.184 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:34.184 00:35:34.184 real 0m3.574s 00:35:34.184 user 0m1.283s 00:35:34.184 sys 0m1.793s 00:35:34.184 11:18:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.184 11:18:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.184 ************************************ 00:35:34.184 END TEST nvmf_identify_kernel_target 00:35:34.184 ************************************ 00:35:34.443 11:18:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:34.443 11:18:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:34.443 
11:18:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.443 11:18:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.443 ************************************ 00:35:34.443 START TEST nvmf_auth_host 00:35:34.443 ************************************ 00:35:34.443 11:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:34.443 * Looking for test storage... 00:35:34.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:35:34.443 11:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:34.443 11:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:35:34.443 11:18:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.443 --rc genhtml_branch_coverage=1 00:35:34.443 --rc genhtml_function_coverage=1 00:35:34.443 --rc genhtml_legend=1 00:35:34.443 --rc geninfo_all_blocks=1 00:35:34.443 --rc geninfo_unexecuted_blocks=1 00:35:34.443 00:35:34.443 ' 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.443 --rc genhtml_branch_coverage=1 00:35:34.443 --rc genhtml_function_coverage=1 00:35:34.443 --rc genhtml_legend=1 00:35:34.443 --rc geninfo_all_blocks=1 00:35:34.443 --rc geninfo_unexecuted_blocks=1 00:35:34.443 00:35:34.443 ' 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.443 --rc genhtml_branch_coverage=1 00:35:34.443 --rc genhtml_function_coverage=1 00:35:34.443 --rc genhtml_legend=1 00:35:34.443 --rc geninfo_all_blocks=1 00:35:34.443 --rc geninfo_unexecuted_blocks=1 00:35:34.443 00:35:34.443 ' 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:34.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.443 --rc genhtml_branch_coverage=1 00:35:34.443 --rc genhtml_function_coverage=1 00:35:34.443 --rc genhtml_legend=1 00:35:34.443 --rc geninfo_all_blocks=1 00:35:34.443 --rc geninfo_unexecuted_blocks=1 00:35:34.443 00:35:34.443 ' 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.443 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:34.703 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # 
nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@223 -- # create_target_ns 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@105 -- # delete_main_bridge 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:35:34.703 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local 
dev=initiator0 in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 
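[annotation] set_ip converts entries from the 0x0a000001 address pool into dotted-quad form; the trace shows val_to_ip's printf receiving the already-split octets but elides the arithmetic between setup.sh@11 and @13. A plausible reconstruction of that helper:

    # Turn a 32-bit pool value into an IPv4 address (167772161 == 0x0A000001).
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) \
            $((  val        & 0xff ))
    }
    val_to_ip 167772161   # 10.0.0.1 (initiator0)
    val_to_ip 167772162   # 10.0.0.2 (target0, inside nvmf_ns_spdk)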
00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:35:34.704 10.0.0.1 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:35:34.704 10.0.0.2 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:35:34.704 11:18:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:34.704 11:18:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:34.704 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target1 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 
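Stripped of the xtrace noise, each interface pair traced here reduces to a handful of iproute2 calls: a veth pair per endpoint, the target side moved into the nvmf_ns_spdk namespace, addresses mirrored into ifalias for later lookup, the *_br peers enslaved to the nvmf_br bridge, and an iptables rule opening TCP/4420. A condensed reconstruction for pair 0 — all names and addresses come from the log; ordering is simplified, the iptables comment match is dropped, and error handling is omitted:

# One veth pair per endpoint; the *_br peers stay behind as bridge ports.
ip link add initiator0 type veth peer name initiator0_br
ip link add target0    type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk   # target side lives in the SPDK namespace

# Assign addresses and record them in ifalias, which get_ip_address reads back.
ip addr add 10.0.0.1/24 dev initiator0
echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias

# Bring everything up and enslave the peer ends to the shared bridge.
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br master nvmf_br && ip link set target0_br up

# Let NVMe/TCP traffic arriving on the initiator side through the host firewall.
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

Pair 1 (initiator1/target1 at 10.0.0.3 and 10.0.0.4) repeats the same steps below, after which ping_ips confirms reachability in both directions before the target is started.
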
00:35:34.705 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:35:34.964 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:35:34.964 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:35:34.964 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:34.964 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:35:34.964 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772163 00:35:34.964 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:35:34.964 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:35:34.965 10.0.0.3 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772164 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:35:34.965 10.0.0.4 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:35:34.965 11:18:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:34.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:35:34.965 00:35:34.965 --- 10.0.0.1 ping statistics --- 00:35:34.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.965 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:34.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:34.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:35:34.965 00:35:34.965 --- 10.0.0.2 ping statistics --- 00:35:34.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.965 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:34.965 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:35:34.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:35:34.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:35:34.966 00:35:34.966 --- 10.0.0.3 ping statistics --- 00:35:34.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.966 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:35:34.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:35:34.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.143 ms 00:35:34.966 00:35:34.966 --- 10.0.0.4 ping statistics --- 00:35:34.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.966 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # return 0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:34.966 
11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:34.966 
11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:35:34.966 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:35:35.225 ' 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=92724 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 92724 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92724 ']' 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.225 11:18:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.160 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.161 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:36.161 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:36.161 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:36.161 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=bcf90c30bbc02abbd2cab83dc2bf7722 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Vtc 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key bcf90c30bbc02abbd2cab83dc2bf7722 0 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 bcf90c30bbc02abbd2cab83dc2bf7722 0 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=bcf90c30bbc02abbd2cab83dc2bf7722 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Vtc 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Vtc 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Vtc 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 
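gen_dhchap_key, traced from here on, pulls len/2 random bytes from /dev/urandom with xxd -p and wraps the hex string as an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64>:, with digest ids 0–3 for null/sha256/sha384/sha512 exactly as the digests array above declares. A hedged sketch of the wrapping step — the base64-of-secret-plus-CRC32 layout follows the DH-HMAC-CHAP secret representation that the script's inline python step produces; treat this as an approximation of common.sh, not a verbatim copy:

# 32 hex characters of key material, as the traced "gen_dhchap_key null 32" draws:
key=$(xxd -p -c0 -l 16 /dev/urandom)

# Wrap it as DHHC-1:<digest id>:base64(secret || crc32(secret), little-endian):
python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the hex string is used as-is
crc = zlib.crc32(secret).to_bytes(4, "little")    # 4-byte integrity trailer
print(f"DHHC-1:0{sys.argv[2]}:{base64.b64encode(secret + crc).decode()}:")
' "$key" 0                                        # 0=null, 1=sha256, 2=sha384, 3=sha512

# The formatted secret is written to a mktemp-created /tmp/spdk.key-* file and
# chmod 0600, then registered over RPC with "keyring_file_add_key key<N> <file>".
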
00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c071bde8e9ca25607a70637fc39d10d9cccabe0a9338efb68917ddc3f85b6510 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Rk9 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key c071bde8e9ca25607a70637fc39d10d9cccabe0a9338efb68917ddc3f85b6510 3 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c071bde8e9ca25607a70637fc39d10d9cccabe0a9338efb68917ddc3f85b6510 3 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.424 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c071bde8e9ca25607a70637fc39d10d9cccabe0a9338efb68917ddc3f85b6510 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Rk9 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.Rk9 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Rk9 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=67048298a7d1a4ee826cb9a5ea25c92fc9020a965795e4b4 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.TE3 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 67048298a7d1a4ee826cb9a5ea25c92fc9020a965795e4b4 0 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 
67048298a7d1a4ee826cb9a5ea25c92fc9020a965795e4b4 0 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=67048298a7d1a4ee826cb9a5ea25c92fc9020a965795e4b4 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:35:36.425 11:19:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.TE3 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.TE3 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.TE3 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=1423ed1ce9b5d943ad158dc45db6705b8ce41066bccd5349 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.zvH 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 1423ed1ce9b5d943ad158dc45db6705b8ce41066bccd5349 2 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 1423ed1ce9b5d943ad158dc45db6705b8ce41066bccd5349 2 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=1423ed1ce9b5d943ad158dc45db6705b8ce41066bccd5349 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:35:36.425 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.zvH 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.zvH 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zvH 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=1ea11c845bcea395dfa331473016addf 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.CXG 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 1ea11c845bcea395dfa331473016addf 1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 1ea11c845bcea395dfa331473016addf 1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=1ea11c845bcea395dfa331473016addf 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.CXG 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.CXG 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.CXG 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=de2f7a7db6d8f8ca57ebe90e90553ea3 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.esU 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key de2f7a7db6d8f8ca57ebe90e90553ea3 1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 de2f7a7db6d8f8ca57ebe90e90553ea3 1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@506 -- # key=de2f7a7db6d8f8ca57ebe90e90553ea3 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.esU 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.esU 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.esU 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=9e4581deaa6e6f70535dfb5fc7be5d8625a8ccbf62be6316 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.TK5 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 9e4581deaa6e6f70535dfb5fc7be5d8625a8ccbf62be6316 2 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 9e4581deaa6e6f70535dfb5fc7be5d8625a8ccbf62be6316 2 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=9e4581deaa6e6f70535dfb5fc7be5d8625a8ccbf62be6316 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.TK5 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.TK5 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.TK5 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:35:36.690 
11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=e4adc8a1b952905cb645997461faea2b 00:35:36.690 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Axu 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key e4adc8a1b952905cb645997461faea2b 0 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 e4adc8a1b952905cb645997461faea2b 0 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=e4adc8a1b952905cb645997461faea2b 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Axu 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Axu 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Axu 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=66dec34e3b0b2416a8265e8a4ce76f1cadd00ca85958a05834b4216797651683 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.lqy 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 66dec34e3b0b2416a8265e8a4ce76f1cadd00ca85958a05834b4216797651683 3 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 66dec34e3b0b2416a8265e8a4ce76f1cadd00ca85958a05834b4216797651683 3 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=66dec34e3b0b2416a8265e8a4ce76f1cadd00ca85958a05834b4216797651683 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@507 -- # python - 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.lqy 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.lqy 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lqy 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92724 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92724 ']' 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:36.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:36.949 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Vtc 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Rk9 ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rk9 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.TE3 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zvH ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
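[annotation] The gen_dhchap_key/format_key calls traced above reduce to: read N hex characters from /dev/urandom, then wrap them in the DH-HMAC-CHAP secret representation "DHHC-1:<digest-id>:<base64>:". A minimal sketch of that formatting step, assuming the secret bytes are the ASCII hex string suffixed with its little-endian CRC-32 before base64 encoding (helper bodies here are inferred from the trace, not copied from nvmf/common.sh):

format_key() {
	local prefix=$1 key=$2 digest=$3   # digest-id: 0=null, 1=sha256, 2=sha384, 3=sha512
	python - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the secret, little endian (assumed)
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PY
}

key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex chars, as in the sha384 case above
file=$(mktemp -t spdk.key-sha384.XXX)
format_key DHHC-1 "$key" 2 > "$file" && chmod 0600 "$file"

This matches the observed outputs, e.g. "DHHC-1:02:...==:" for a 48-character sha384-class secret.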
/tmp/spdk.key-sha384.zvH 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.CXG 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.esU ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.esU 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.TK5 00:35:37.208 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Axu ]] 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Axu 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lqy 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:37.209 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:37.468 
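[annotation] Reassembled from the host/auth.sh@80-82 markers, the key-registration loop that just completed is effectively the following (rpc_cmd wraps scripts/rpc.py against the target listening on /var/tmp/spdk.sock); the controller key is skipped where ckey is empty, as for key4 above:

for i in "${!keys[@]}"; do
	rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
	if [[ -n ${ckeys[i]} ]]; then
		rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
	fi
done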
11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! 
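[annotation] The get_main_ns_ip chain above resolves the test address from the interface's ifalias. A condensed equivalent of what nvmf/setup.sh is doing (namespace handling dropped for brevity):

get_ip_address() {
	local dev=$1 ip
	ip=$(cat "/sys/class/net/$dev/ifalias")   # the test IP is stashed in the interface alias
	[[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator0   # -> 10.0.0.1 in this run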
-e /sys/module/nvmet ]] 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:37.468 11:19:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:37.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:37.727 Waiting for block devices as requested 00:35:37.985 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:37.985 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:38.553 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:35:38.553 No valid GPT data, bailing 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:35:38.813 No valid GPT data, bailing 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:35:38.813 No valid GPT data, bailing 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:35:38.813 No valid GPT data, bailing 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir 
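[annotation] The "No valid GPT data, bailing" scan above is picking a namespace that is safe to hand to the kernel target. A rough sketch of the shape of that selection, inferred from the trace ($rootdir is a stand-in for the SPDK repo path; the real block_in_use in scripts/common.sh is more involved):

block_in_use() {
	local block=$1 pt
	# Busy if spdk-gpt.py finds SPDK GPT data, or blkid reports a partition table
	"$rootdir/scripts/spdk-gpt.py" "$block" && return 0
	pt=$(blkid -s PTTYPE -o value "/dev/$block")
	[[ -n $pt ]]
}

nvme=""
for block in /sys/block/nvme*; do
	[[ -e $block ]] || continue
	[[ $(<"$block/queue/zoned") != none ]] && continue   # skip zoned namespaces
	block_in_use "${block##*/}" || nvme=/dev/${block##*/}
done
# nvme ends up as the last free namespace; /dev/nvme1n1 in this run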
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:35:38.813 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -a 10.0.0.1 -t tcp -s 4420 00:35:39.073 00:35:39.073 Discovery Log Number of Records 2, Generation counter 2 00:35:39.073 =====Discovery Log Entry 0====== 00:35:39.073 trtype: tcp 00:35:39.073 adrfam: ipv4 00:35:39.073 subtype: current discovery subsystem 00:35:39.073 treq: not specified, sq flow control disable supported 00:35:39.073 portid: 1 00:35:39.073 trsvcid: 4420 00:35:39.073 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:39.073 traddr: 10.0.0.1 00:35:39.073 eflags: none 00:35:39.073 sectype: none 00:35:39.073 =====Discovery Log Entry 1====== 00:35:39.073 trtype: tcp 00:35:39.073 adrfam: ipv4 00:35:39.073 subtype: nvme subsystem 00:35:39.073 treq: not specified, sq flow control disable supported 00:35:39.073 portid: 1 00:35:39.073 trsvcid: 4420 00:35:39.073 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:39.073 traddr: 10.0.0.1 00:35:39.073 eflags: none 00:35:39.073 sectype: none 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
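[annotation] The mkdir/echo/ln sequence above is standard kernel nvmet configfs wiring. The trace only shows the echoed values, so the destination attributes below are assumptions based on the usual nvmet attribute names; the discovery output that follows (two records: the discovery subsystem plus cnode0) confirms the port came up:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target of the first echo
echo 1 > "$subsys/attr_allow_any_host"                        # assumed; re-restricted below
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp  > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# host/auth.sh@36-38 then pins the subsystem to a single host before enabling auth:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"                        # assumed target of the 'echo 0'
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"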
ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.073 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:39.074 
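[annotation] nvmet_auth_set_key feeds the digest, DH group, and DHHC-1 strings to the kernel side of the handshake. A sketch of where those echoes most likely land, assuming the kernel nvmet per-host dhchap attribute names ($key/$ckey are the DHHC-1 strings echoed in the trace):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"       # attribute names assumed
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$key"  > "$host/dhchap_key"
echo "$ckey" > "$host/dhchap_ctrl_key"          # bidirectional auth; omitted when ckey is empty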
11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.074 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.334 nvme0n1 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
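[annotation] Stripped of the xtrace noise, each connect_authenticate iteration amounts to four RPCs, all visible verbatim above (here the sanity pass that allows every digest and DH group while connecting with key1/ckey1):

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
	--dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers               # expect a controller named nvme0
rpc_cmd bdev_nvme_detach_controller nvme0       # tear down before the next combination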
echo 'hmac(sha256)' 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.334 nvme0n1 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.334 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.594 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.594 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.594 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.594 11:19:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 
00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.594 nvme0n1 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.594 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.595 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.854 nvme0n1 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.854 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.855 nvme0n1 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.855 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:40.114 11:19:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:40.114 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.115 nvme0n1 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=:
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:40.115 11:19:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]]
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=:
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.683 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.683 nvme0n1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==:
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==:
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==:
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==:
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.684 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.942 nvme0n1
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG:
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3:
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG:
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]]
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3:
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:40.942 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:40.943 nvme0n1
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:40.943 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:35:41.200 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==:
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C:
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==:
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C:
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.201 nvme0n1
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=:
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=:
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.201 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.459 nvme0n1
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:41.459 11:19:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=:
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:41.459 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:42.028 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:
00:35:42.028 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]]
00:35:42.028 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=:
00:35:42.028 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:35:42.028 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:42.028 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:42.028 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.029 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.288 nvme0n1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==:
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==:
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==:
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==:
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.288 11:19:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.547 nvme0n1
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:42.547 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG:
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3:
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG:
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]]
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3:
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.548 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.807 nvme0n1
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==:
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C:
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==:
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C:
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:42.807 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.066 nvme0n1
00:35:43.066 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.066 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:43.066 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:43.066 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.066 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=:
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=:
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.067 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.326 nvme0n1
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=:
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:43.326 11:19:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]]
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=:
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.725 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.293 nvme0n1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.293 11:19:09 
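The check that follows every attach is equally mechanical: list the controllers, compare the single name against the expected nvme0, then detach so the next keyid starts from a clean slate. Condensed from the @64/@65 frames above, using the same jq filter the log shows:

    # Verify the authenticated connection came up, then tear it down (@64-65).
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    scripts/rpc.py bdev_nvme_detach_controller nvme0
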
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:45.293 11:19:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.293 11:19:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.553 nvme0n1 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
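Every round re-resolves the initiator address through the same nvmf/setup.sh chain (@156-166): map the logical name to a device, read its ifalias, echo the result. In this virtual topology the address is stashed in the interface alias, so the lookup reduces to a single sysfs read; a trimmed sketch that skips the network-namespace (in_ns) handling the real helper also supports:

    # Simplified get_ip_address as exercised above: the test network stores
    # each device's IP in /sys/class/net/<dev>/ifalias (setup.sh@163-166).
    get_ip_address() {
        local dev=$1 ip
        ip=$(cat "/sys/class/net/$dev/ifalias")
        [[ -n $ip ]] && echo "$ip"
    }
    get_ip_address initiator0    # prints 10.0.0.1 in this run
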
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.553 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.812 nvme0n1 00:35:45.812 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.812 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.812 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.812 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.812 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.812 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:46.070 11:19:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:46.070 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.071 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.329 nvme0n1 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:46.329 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.330 11:19:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.589 nvme0n1 00:35:46.589 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.589 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.589 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.589 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.589 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.589 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
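Keyid 4 just above is the one entry without a controller key: ckey comes back empty (@46), the [[ -z '' ]] guard at @51 skips the second echo, and the attach carries only --dhchap-key key4. On the host side the same asymmetry is handled by the array expansion logged at host/auth.sh@58, which makes the whole flag pair vanish when the key is absent:

    # host/auth.sh@58 as logged: ${var:+...} expands to nothing for an
    # unset/empty ckeys[4], so "${ckey[@]}" contributes zero words below.
    keyid=4; ckeys=()                # keyid 4 ships without a controller key
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"               # 0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
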
keyid=0 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:46.847 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.848 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.414 nvme0n1 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:47.414 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.415 11:19:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.981 nvme0n1 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.981 11:19:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.981 11:19:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.981 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.549 nvme0n1 00:35:48.549 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.549 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.549 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.549 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.549 11:19:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.549 11:19:13 
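A note on the secrets cycling through this section: each follows the DHHC-1:<t>:<base64>: representation of a DH-HMAC-CHAP secret, where the middle field should name the transformation applied to the raw secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret plus a trailing CRC-32. That reading is inferred from the key format rather than stated anywhere in this log; a quick length check against the keyid=2 secret above is at least consistent with it:

    # Decode the payload of the keyid=2 secret: 48 base64 chars decode to
    # 36 bytes, i.e. a 32-byte secret followed by a 4-byte CRC-32
    # (format assumption as described in the note above).
    key='DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG:'
    cut -d: -f3 <<< "$key" | base64 -d | wc -c    # -> 36
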
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.549 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:48.550 11:19:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.550 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.117 nvme0n1 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.117 11:19:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.682 
nvme0n1 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:49.682 11:19:14 
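The @100 frame appearing above is the outermost of three nested loops driving this whole section: digests x dhgroups x keyids, with a fresh target key and a full connect/verify/detach per combination. A skeleton reconstructed from the host/auth.sh@100-104 frames; the array contents list only the values this excerpt exercises (the full script may cover more), and keys/ckeys plus the two helpers are the script's own:

    # Loop skeleton inferred from host/auth.sh@100-104; the sha256 rounds
    # finished above, sha384 starts here with ffdhe2048 keyid 0.
    digests=(sha256 sha384)                       # as seen in this excerpt
    dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do             # @100
        for dhgroup in "${dhgroups[@]}"; do       # @101
            for keyid in "${!keys[@]}"; do        # @102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done
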
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.682 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.940 nvme0n1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.940 nvme0n1 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.940 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 
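The address lookup traced just above (nvmf/setup.sh@156-166) never parses ip-addr output: the test network setup evidently stores each device's IP in its kernel ifalias, so resolving an address reduces to a file read. A minimal sketch of that helper as the trace implies it, assuming the ifalias convention holds for every test device:

    # Sketch of get_ip_address per the trace; assumes the network setup
    # already wrote the device's IP into /sys/class/net/<dev>/ifalias.
    get_ip_address() {
        local dev=$1 ip
        ip=$(cat "/sys/class/net/${dev}/ifalias")   # e.g. 10.0.0.1 for initiator0
        [[ -n $ip ]] && echo "$ip"                  # print nothing if no alias is set
    }

    get_ip_address initiator0    # -> 10.0.0.1 in this run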
00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.199 nvme0n1 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.199 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 
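Every secret echoed in this trace follows the DH-HMAC-CHAP secret representation (NVMe TP 8006): DHHC-1:<t>:<base64>:, where <t> names the secret transformation (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret with a 4-byte CRC-32 appended. A quick length check on keyid 0's secret, copied verbatim from the trace (the CRC itself is not recomputed here):

    # Decode one traced DHHC-1 secret and confirm the payload length.
    key='DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:'
    b64=${key#DHHC-1:*:}     # strip the 'DHHC-1:<t>:' prefix
    b64=${b64%:}             # and the trailing colon
    echo -n "$b64" | base64 -d | wc -c    # 36 bytes = 32-byte secret + 4-byte CRC-32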
00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.200 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.459 nvme0n1 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.459 11:19:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:50.459 11:19:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.459 11:19:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.459 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.717 nvme0n1 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:50.717 11:19:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:50.717 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@101 -- # echo initiator0 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.718 nvme0n1 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.718 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.979 11:19:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.979 nvme0n1 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.979 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.980 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.253 nvme0n1 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
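The body of nvmet_auth_set_key never appears in this excerpt; judging from the values it echoes (hmac(sha384), the dhgroup, the key, and the controller key when one exists), it programs the kernel soft target's per-host authentication attributes before each connect attempt. A hedged reconstruction against the Linux nvmet configfs layout — the attribute paths below are an assumption, not taken from the trace:

    # Hypothetical sketch of nvmet_auth_set_key; configfs attribute names assumed.
    # keys/ckeys are the DHHC-1 secret arrays defined earlier in the script.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "$host/dhchap_hash"     # e.g. hmac(sha384)
        echo "$dhgroup"        > "$host/dhchap_dhgroup"  # e.g. ffdhe3072
        echo "${keys[keyid]}"  > "$host/dhchap_key"      # host secret
        # controller secret only when bidirectional auth is exercised
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }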
00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' 
]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.253 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.514 nvme0n1 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.514 11:19:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.514 nvme0n1 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.514 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.773 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.774 nvme0n1 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.774 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.774 11:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.033 11:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.033 nvme0n1 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.033 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:52.293 
11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # 
dev=initiator0 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:52.293 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.294 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.294 nvme0n1 00:35:52.294 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.294 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.294 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.294 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.294 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.294 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:52.553 11:19:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.553 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.554 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:52.554 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:52.554 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.554 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:52.554 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.554 11:19:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.554 nvme0n1 
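
The trace above repeats one fixed pattern per key id: nvmet_auth_set_key programs the secret into the kernel target, then connect_authenticate pins the SPDK host to a single digest/dhgroup pair, attaches with --dhchap-key keyN (adding --dhchap-ctrlr-key ckeyN only when a controller secret exists, which is why key 4, whose ckey is empty, authenticates one-way), verifies the controller surfaced as nvme0, and detaches. A minimal sketch of that loop reconstructed from the xtrace lines; the keys/ckeys arrays are assumptions, and test/nvmf/host/auth.sh remains the authoritative source:

    # Reconstructed from the host/auth.sh xtrace above; hypothetical keys/ckeys
    # arrays hold the DHHC-1 secrets, indexed by key id.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key is optional: expands to nothing when ckeys[keyid] is empty.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only succeeds if DH-HMAC-CHAP negotiation completed.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for dhgroup in "${dhgroups[@]}"; do            # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do             # 0..4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # target side
            connect_authenticate sha384 "$dhgroup" "$keyid" # host side
        done
    done
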
00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.554 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.826 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:52.827 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.828 nvme0n1 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.828 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in 
"${dhgroups[@]}" 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.089 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.090 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.349 nvme0n1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.349 11:19:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.608 nvme0n1 00:35:53.608 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.608 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.608 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.608 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.608 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.608 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.867 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.126 nvme0n1 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.127 11:19:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
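
Each attach in this log resolves the target address the same way: get_main_ns_ip walks from the logical name initiator0 to the backing netdev and reads the address out of /sys/class/net/<dev>/ifalias, where this virtual-network test setup stashes the IP; that is the cat/echo cascade that keeps yielding 10.0.0.1 above. A condensed sketch of the helper, with the network-namespace handling and the eval indirection trimmed (nvmf/setup.sh is the authoritative source):

    # Condensed from the nvmf/setup.sh trace above; the real helper also
    # supports running inside a network namespace via an eval'd prefix.
    get_ip_address() {
        local dev=$1 ip
        dev=$(get_net_dev "$dev") || return 1      # logical name -> netdev
        ip=$(cat "/sys/class/net/$dev/ifalias")    # e.g. 10.0.0.1
        [[ -n $ip ]] && echo "$ip"
    }
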
00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.127 11:19:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.386 nvme0n1 00:35:54.386 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.386 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.386 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.386 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.386 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha384 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.645 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
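
Every secret in this section uses the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where, per the spec's secret format, the two-digit field selects the optional hash transformation of the payload (00 none, 01/02/03 for SHA-256/384/512) and the base64 payload carries the key bytes plus a 4-byte CRC32; that is consistent with the 00-keys above decoding to 36 bytes (32-byte secret plus CRC). A quick framing check as a sketch; is_dhchap_secret is a hypothetical helper, not an SPDK function, and it does not verify the embedded CRC32:

    # Hypothetical helper: validates only the outer DHHC-1 framing seen in
    # the secrets above, not the embedded CRC32.
    is_dhchap_secret() {
        [[ $1 =~ ^DHHC-1:(00|01|02|03):[A-Za-z0-9+/]+={0,2}:$ ]]
    }

    is_dhchap_secret "DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH:" && echo framing-ok
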
00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.646 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 nvme0n1 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.905 11:19:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.473 nvme0n1 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:55.473 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.474 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:55.732 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.733 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.301 nvme0n1 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
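[editor's note] Every attach in this run is preceded by the same address lookup chain: get_main_ns_ip resolves through get_initiator_ip_address to get_ip_address initiator0, which, per the nvmf/setup.sh helpers traced above, maps the logical device name and then reads the IP that the test network setup stored in the interface's ifalias, yielding 10.0.0.1 each time. A simplified reconstruction follows; the real helpers also handle network namespaces via the in_ns=/eval indirection visible in the trace, which is elided here.

    # Simplified sketch of the nvmf/setup.sh lookup; namespace handling elided.
    get_ip_address() {
        local dev=$1 ip
        dev=$(get_net_dev "$dev")                   # logical name -> actual net device
        ip=$(cat "/sys/class/net/${dev}/ifalias")   # setup stored the test IP here
        [[ -n $ip ]] && echo "$ip"
    }

    get_ip_address initiator0    # -> 10.0.0.1 in this job

Storing the address in ifalias lets the harness recover it with a plain sysfs read instead of parsing `ip addr` output, which is why the same eval/cat pair repeats before every bdev_nvme_attach_controller call in this log.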
00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.301 11:19:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.870 nvme0n1 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:56.871 
11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.871 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.440 nvme0n1 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.440 11:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.007 nvme0n1 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:58.007 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.008 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.293 nvme0n1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.293 nvme0n1 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.293 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:58.294 
11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.294 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.577 11:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.577 nvme0n1 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:58.577 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.578 11:19:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.578 nvme0n1 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.578 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.839 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.839 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.839 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.839 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.839 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.839 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.839 11:19:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.840 nvme0n1 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:58.840 11:19:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.840 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.840 11:19:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.100 nvme0n1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.100 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.360 nvme0n1 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
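
For readers following the trace: each iteration above pairs one digest/dhgroup combination with one key index, programs the target, reconfigures the host, connects, verifies the controller actually appeared, and detaches again. A condensed sketch of that cycle, reconstructed from the commands visible in this slice of the log (the sha512 pass; loop body simplified, xtrace plumbing and error handling omitted):

    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Target side: install the key (and ctrlr key, if any) for this index.
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        # Host side: restrict negotiation to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # Bidirectional auth only when a controller key exists for this index.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_ip_address initiator0)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
        # A successful handshake must leave exactly the expected controller.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done

The `nvme0n1` lines interleaved in the output are the namespace surfacing on the host once each attach succeeds.
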
00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:59.360 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:59.360 11:19:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.361 nvme0n1 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.361 11:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.620 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.621 nvme0n1 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.621 11:19:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.621 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.882 nvme0n1 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
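
The 10.0.0.1 used in every attach is not hard-coded: the repeated `nvmf/setup.sh` block above (get_main_ns_ip -> get_ip_address -> get_net_dev -> cat ifalias) resolves it at run time from the interface alias that the network setup stage recorded. A condensed sketch of that helper, following the trace (the `get_net_dev` indirection and optional namespace handling are dropped for brevity):

    # Resolve a logical device name (initiator0, target0, ...) to its address.
    # Setup stores each device's IP in its ifalias, so tests stay agnostic of
    # the real (possibly virtual) interface names.
    get_ip_address() {
      local dev=$1 ip
      ip=$(cat "/sys/class/net/$dev/ifalias")
      [[ -n $ip ]] && echo "$ip"
    }
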
00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.882 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.142 nvme0n1 00:36:00.142 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.142 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.142 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.142 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.142 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:00.143 11:19:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.143 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.404 nvme0n1 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:00.404 
11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:00.404 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:00.405 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:00.405 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.405 11:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
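
Each `nvmet_auth_set_key` call echoes four values in turn: the HMAC name, the DH group, the DHHC-1 key, and, when the index has one, the controller key for bidirectional authentication (keyid 4 carries an empty ckey, so its `[[ -z '' ]]` guard skips that step). xtrace does not show where those echoes are redirected; a sketch under the assumption that they land in the kernel nvmet target's configfs host entry (paths inferred, not shown in this trace):

    # Program DH-HMAC-CHAP material for the host NQN on the kernel nvmet target.
    nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)"  > "$host/dhchap_hash"
      echo "$dhgroup"       > "$host/dhchap_dhgroup"
      echo "${keys[keyid]}" > "$host/dhchap_key"
      # keyid 4 has no controller key: unidirectional auth skips this write.
      [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }
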
00:36:00.663 nvme0n1 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:00.663 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:00.664 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:00.664 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:00.664 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.664 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.922 nvme0n1 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 
-- # [[ -n initiator0 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.922 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.180 nvme0n1 00:36:01.180 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.181 11:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.439 nvme0n1 00:36:01.439 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.439 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.439 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.439 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.439 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.439 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 
00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.698 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.958 nvme0n1 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.958 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.217 nvme0n1 00:36:02.217 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.217 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.217 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.217 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.217 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.217 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.478 11:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.738 nvme0n1 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:02.738 11:19:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.738 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.997 nvme0n1 00:36:02.997 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.998 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.998 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.998 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.998 
11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.998 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNmOTBjMzBiYmMwMmFiYmQyY2FiODNkYzJiZjc3MjJesSCH: 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzA3MWJkZThlOWNhMjU2MDdhNzA2MzdmYzM5ZDEwZDljY2NhYmUwYTkzMzhlZmI2ODkxN2RkYzNmODViNjUxMD42gVo=: 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.257 11:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.826 nvme0n1 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:03.826 11:19:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:03.826 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:03.827 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.827 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.827 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.395 nvme0n1 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.395 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:04.396 11:19:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.396 11:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.964 nvme0n1 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU0NTgxZGVhYTZlNmY3MDUzNWRmYjVmYzdiZTVkODYyNWE4Y2NiZjYyYmU2MzE2xvJlmg==: 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTRhZGM4YTFiOTUyOTA1Y2I2NDU5OTc0NjFmYWVhMmJD3u8C: 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:04.964 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.965 11:19:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.606 nvme0n1 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjZkZWMzNGUzYjBiMjQxNmE4MjY1ZThhNGNlNzZmMWNhZGQwMGNhODU5NThhMDU4MzRiNDIxNjc5NzY1MTY4M3xvZWE=: 00:36:05.606 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- 
# local dev=initiator0 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.607 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.176 nvme0n1 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.176 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.177 2024/12/05 11:19:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:06.177 request: 00:36:06.177 { 00:36:06.177 "method": "bdev_nvme_attach_controller", 00:36:06.177 "params": { 00:36:06.177 "name": "nvme0", 00:36:06.177 "trtype": "tcp", 00:36:06.177 "traddr": "10.0.0.1", 00:36:06.177 "adrfam": "ipv4", 00:36:06.177 "trsvcid": "4420", 00:36:06.177 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:06.177 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:06.177 "prchk_reftag": false, 00:36:06.177 "prchk_guard": false, 00:36:06.177 "hdgst": false, 00:36:06.177 "ddgst": false, 00:36:06.177 "allow_unrecognized_csi": false 00:36:06.177 } 00:36:06.177 } 00:36:06.177 Got JSON-RPC error response 00:36:06.177 GoRPCClient: error on JSON-RPC call 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.177 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:06.437 11:19:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.437 2024/12/05 11:19:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:06.437 request: 00:36:06.437 { 00:36:06.437 "method": "bdev_nvme_attach_controller", 00:36:06.437 "params": { 00:36:06.437 "name": "nvme0", 00:36:06.437 "trtype": "tcp", 00:36:06.437 "traddr": "10.0.0.1", 00:36:06.437 "adrfam": "ipv4", 00:36:06.437 "trsvcid": "4420", 00:36:06.437 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:06.437 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:06.437 
"prchk_reftag": false, 00:36:06.437 "prchk_guard": false, 00:36:06.437 "hdgst": false, 00:36:06.437 "ddgst": false, 00:36:06.437 "dhchap_key": "key2", 00:36:06.437 "allow_unrecognized_csi": false 00:36:06.437 } 00:36:06.437 } 00:36:06.437 Got JSON-RPC error response 00:36:06.437 GoRPCClient: error on JSON-RPC call 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.437 11:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.438 2024/12/05 11:19:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:36:06.438 request: 00:36:06.438 { 00:36:06.438 "method": "bdev_nvme_attach_controller", 00:36:06.438 "params": { 00:36:06.438 "name": "nvme0", 00:36:06.438 "trtype": "tcp", 00:36:06.438 "traddr": "10.0.0.1", 00:36:06.438 "adrfam": "ipv4", 00:36:06.438 "trsvcid": "4420", 00:36:06.438 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:06.438 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:06.438 "prchk_reftag": false, 00:36:06.438 "prchk_guard": false, 00:36:06.438 "hdgst": false, 00:36:06.438 "ddgst": false, 00:36:06.438 "dhchap_key": "key1", 00:36:06.438 "dhchap_ctrlr_key": "ckey2", 00:36:06.438 "allow_unrecognized_csi": false 00:36:06.438 } 00:36:06.438 } 00:36:06.438 Got JSON-RPC error response 00:36:06.438 GoRPCClient: error on JSON-RPC call 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:06.438 11:19:31 
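[annotation] Every failure case above runs through the same expected-failure wrapper. A minimal sketch of the NOT helper whose xtrace keeps reappearing (the real one in autotest_common.sh also validates its argument via valid_exec_arg, which is the `type -t` check visible in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: a genuine failure
        (( !es == 0 ))                   # succeed only if the command failed
    }

    # as used above: an attach with a missing or mismatched DH-HMAC-CHAP key
    # must be rejected (Code=-5) for the test to pass
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2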
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.438 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.697 nvme0n1 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.697 2024/12/05 11:19:31 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:36:06.697 request: 00:36:06.697 { 00:36:06.697 "method": "bdev_nvme_set_keys", 00:36:06.697 "params": { 00:36:06.697 "name": "nvme0", 00:36:06.697 "dhchap_key": "key1", 00:36:06.697 "dhchap_ctrlr_key": "ckey2" 00:36:06.697 } 00:36:06.697 } 00:36:06.697 Got JSON-RPC error response 00:36:06.697 GoRPCClient: error on JSON-RPC call 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:06.697 11:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:07.633 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.633 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.633 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:07.633 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.633 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjcwNDgyOThhN2QxYTRlZTgyNmNiOWE1ZWEyNWM5MmZjOTAyMGE5NjU3OTVlNGI0+tEj6Q==: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTQyM2VkMWNlOWI1ZDk0M2FkMTU4ZGM0NWRiNjcwNWI4Y2U0MTA2NmJjY2Q1MzQ5lSIHhA==: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:36:07.893 
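[annotation] The get_main_ns_ip chain traced between every RPC boils down to one sysfs read: the setup scripts appear to stash each interface's address in its ifalias when the veths are created, so resolving the initiator IP never needs to parse `ip addr` output. The short jq-length poll just above it (host/auth.sh@137-138) is the matching wait loop, sleeping 1s until bdev_nvme_get_controllers reports zero controllers before the next combination starts. A sketch of the lookup:

    get_ip_address() {   # simplified from the nvmf/setup.sh trace
        local dev=$1 ip
        ip=$(cat "/sys/class/net/$dev/ifalias")
        [[ -n $ip ]] && echo "$ip"
    }
    get_ip_address initiator0   # -> 10.0.0.1 in this run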
11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.893 nvme0n1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVhMTFjODQ1YmNlYTM5NWRmYTMzMTQ3MzAxNmFkZGZ0VkHG: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGUyZjdhN2RiNmQ4ZjhjYTU3ZWJlOTBlOTA1NTNlYTOZf+O3: 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:07.893 11:19:32 
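[annotation] After the successful re-attach with key1/ckey1, the trace exercises live re-keying. bdev_nvme_set_keys swaps the DH-HMAC-CHAP keys on an existing controller: the pair the kernel target was just given succeeds, while each half-mismatched combination is refused with Code=-13 (Permission denied), again wrapped in NOT. The three calls, condensed from the trace:

    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2       # accepted
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2   # -13
    NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1   # -13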
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.893 2024/12/05 11:19:32 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:36:07.893 request: 00:36:07.893 { 00:36:07.893 "method": "bdev_nvme_set_keys", 00:36:07.893 "params": { 00:36:07.893 "name": "nvme0", 00:36:07.893 "dhchap_key": "key2", 00:36:07.893 "dhchap_ctrlr_key": "ckey1" 00:36:07.893 } 00:36:07.893 } 00:36:07.893 Got JSON-RPC error response 00:36:07.893 GoRPCClient: error on JSON-RPC call 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:07.893 11:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- 
# set +e 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:09.271 rmmod nvme_tcp 00:36:09.271 rmmod nvme_fabrics 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 92724 ']' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 92724 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92724 ']' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92724 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92724 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92724' 00:36:09.271 killing process with pid 92724 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92724 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92724 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 
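[annotation] Teardown begins here: the kernel initiator modules are unloaded (the bare "rmmod nvme_tcp" / "rmmod nvme_fabrics" lines are modprobe's own output) and the SPDK target process is killed. The killprocess trace condenses to roughly this sketch (the real helper also special-cases non-Linux hosts, hence the uname check in the trace):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # still running?
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]]  # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it before continuing
    }
    killprocess 92724   # pid of the nvmf target (reactor_0) in this run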
00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:36:09.271 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:09.530 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:09.530 11:19:33 
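[annotation] The rest of the cleanup has two halves, condensed below from the traced commands: scrub every firewall rule the test tagged with an SPDK_NVMF comment, then unwind the kernel nvmet configfs tree in reverse order of creation. (The trace also writes a 0 to an enable attribute between these steps; its exact path is not visible in this excerpt.)

    # 1) drop only our own iptables rules, leaving the rest of the ruleset intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # 2) dismantle the kernel target: host ACL first, then port link,
    #    namespace, port, subsystem, and finally the modules
    d=/sys/kernel/config/nvmet
    sub=nqn.2024-02.io.spdk:cnode0
    host=nqn.2024-02.io.spdk:host0
    rm "$d/subsystems/$sub/allowed_hosts/$host"
    rmdir "$d/hosts/$host"
    rm -f "$d/ports/1/subsystems/$sub"
    rmdir "$d/subsystems/$sub/namespaces/1"
    rmdir "$d/ports/1"
    rmdir "$d/subsystems/$sub"
    modprobe -r nvmet_tcp nvmet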
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:36:09.531 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:09.531 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:09.531 11:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:09.531 11:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:09.531 11:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:36:09.531 11:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:36:09.531 11:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:10.097 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:10.356 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:10.356 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:10.356 11:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Vtc /tmp/spdk.key-null.TE3 /tmp/spdk.key-sha256.CXG /tmp/spdk.key-sha384.TK5 /tmp/spdk.key-sha512.lqy /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:36:10.356 11:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:10.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:10.921 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:10.921 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:10.921 00:36:10.921 real 0m36.533s 00:36:10.921 user 0m34.057s 00:36:10.921 sys 0m5.285s 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.921 ************************************ 00:36:10.921 END TEST nvmf_auth_host 00:36:10.921 ************************************ 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.921 ************************************ 00:36:10.921 START TEST nvmf_digest 00:36:10.921 ************************************ 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:10.921 * Looking for test storage... 
00:36:10.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:36:10.921 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:11.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.181 --rc genhtml_branch_coverage=1 00:36:11.181 --rc genhtml_function_coverage=1 00:36:11.181 --rc genhtml_legend=1 00:36:11.181 --rc geninfo_all_blocks=1 00:36:11.181 --rc geninfo_unexecuted_blocks=1 00:36:11.181 00:36:11.181 ' 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:11.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.181 --rc genhtml_branch_coverage=1 00:36:11.181 --rc genhtml_function_coverage=1 00:36:11.181 --rc genhtml_legend=1 00:36:11.181 --rc geninfo_all_blocks=1 00:36:11.181 --rc geninfo_unexecuted_blocks=1 00:36:11.181 00:36:11.181 ' 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:11.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.181 --rc genhtml_branch_coverage=1 00:36:11.181 --rc genhtml_function_coverage=1 00:36:11.181 --rc genhtml_legend=1 00:36:11.181 --rc geninfo_all_blocks=1 00:36:11.181 --rc geninfo_unexecuted_blocks=1 00:36:11.181 00:36:11.181 ' 00:36:11.181 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:11.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.181 --rc genhtml_branch_coverage=1 00:36:11.181 --rc genhtml_function_coverage=1 00:36:11.181 --rc genhtml_legend=1 00:36:11.182 --rc geninfo_all_blocks=1 00:36:11.182 --rc geninfo_unexecuted_blocks=1 00:36:11.182 00:36:11.182 ' 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.182 11:19:35 
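[annotation] The digest test opens by probing the installed lcov: `lt 1.15 2` asks whether lcov 1.15 predates 2.x, steering which coverage flags get exported in the LCOV_OPTS block above. A simplified sketch of the cmp_versions helper being traced (the real one supports several operators and sanitizes fields through the decimal() check seen in the trace):

    cmp_versions_lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    cmp_versions_lt 1.15 2 && echo "lcov predates 2.x"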
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:36:11.182 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:11.182 11:19:35 
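[annotation] Sourcing nvmf/common.sh (traced above) also mints the host identity reused by every later `nvme connect`: a fresh hostnqn per run, with its uuid suffix doubling as the host ID. A sketch consistent with the values printed in the trace; the exact suffix extraction is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:f58d48c7-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: uuid taken from the NQN's last field
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

The "[: : integer expression expected" line above is the shell complaining about an empty operand in common.sh's `[ '' -eq 1 ]` check; the run carries on regardless.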
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@280 -- # nvmf_veth_init 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@223 -- # create_target_ns 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:11.182 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # create_main_bridge 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@105 -- # delete_main_bridge 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.183 11:19:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator0 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:36:11.183 
11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target0 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0 up 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target0_br 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target0 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:36:11.183 10.0.0.1 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.183 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:36:11.184 10.0.0.2 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator0 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:36:11.184 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 
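Everything from create_target_ns through the iptables rule above is one repeatable pattern, applied once per initiator/target pair: a veth pair per endpoint, the target end moved into nvmf_ns_spdk, the *_br peers enslaved to the nvmf_br bridge, and each address mirrored into ifalias. A condensed, standalone sketch of what the trace performs for pair 0 (device names, addresses, and commands as recorded above; assumes root):

  # One-time plumbing
  ip netns add nvmf_ns_spdk                      # target side lives in this namespace
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  # Per-pair plumbing (pair 0)
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk
  ip addr add 10.0.0.1/24 dev initiator0         # val_to_ip 167772161 -> 10.0.0.1
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias          # read back later
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
  ip link set initiator0 up
  ip netns exec nvmf_ns_spdk ip link set target0 up
  ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
  ip link set target0_br master nvmf_br && ip link set target0_br up
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

Pair 1 repeats the same steps with initiator1/target1 and the next two addresses from the pool (10.0.0.3 and 10.0.0.4), after which ping_ips verifies every address in both directions.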
00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target0_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator1 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator1 up' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target1 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1 up 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target1_br 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target1 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772163 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:36:11.443 10.0.0.3 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.443 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772164 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:36:11.444 10.0.0.4 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator1 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator1_br 
master nvmf_br 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:36:11.444 11:19:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target1_br 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 2 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ 
-n initiator0 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:11.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:11.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:36:11.444 00:36:11.444 --- 10.0.0.1 ping statistics --- 00:36:11.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.444 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:11.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:11.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:36:11.444 00:36:11.444 --- 10.0.0.2 ping statistics --- 00:36:11.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.444 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:36:11.444 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:11.445 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.445 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.445 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:36:11.445 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:36:11.703 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:36:11.703 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:36:11.703 00:36:11.703 --- 10.0.0.3 ping statistics --- 00:36:11.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.703 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:36:11.703 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:36:11.703 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:36:11.703 00:36:11.703 --- 10.0.0.4 ping statistics --- 00:36:11.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.703 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # return 0 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:11.703 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:11.704 
11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 
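The ifalias writes pay off in this stretch: nvmf_legacy_env never parses `ip addr` output, it just reads each address back through get_ip_address, inside the namespace for target devices and outside for initiators. A reduced sketch of that lookup (the real helper resolves the device via get_net_dev and bash namerefs; this flattens it to two positional arguments):

  # Simplified get_ip_address: recover the address mirrored into ifalias.
  get_ip_address() {
      local dev=$1 netns=$2
      if [[ -n $netns ]]; then
          ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias"
      else
          cat "/sys/class/net/$dev/ifalias"
      fi
  }
  NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)          # 10.0.0.1
  NVMF_FIRST_TARGET_IP=$(get_ip_address target0 nvmf_ns_spdk)   # 10.0.0.2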
00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:36:11.704 ' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:11.704 ************************************ 00:36:11.704 START TEST nvmf_digest_clean 00:36:11.704 ************************************ 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:11.704 11:19:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=94641 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 94641 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94641 ']' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.704 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:11.704 [2024-12-05 11:19:36.295687] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:11.704 [2024-12-05 11:19:36.295792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.962 [2024-12-05 11:19:36.452197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.962 [2024-12-05 11:19:36.508075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:11.962 [2024-12-05 11:19:36.508137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:11.962 [2024-12-05 11:19:36.508152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:11.962 [2024-12-05 11:19:36.508165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:11.962 [2024-12-05 11:19:36.508176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:11.963 [2024-12-05 11:19:36.508553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.963 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:12.221 null0 00:36:12.221 [2024-12-05 11:19:36.707459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.221 [2024-12-05 11:19:36.731639] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94679 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94679 /var/tmp/bperf.sock 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94679 ']' 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
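nvmf_tgt (pid 94641, launched through `ip netns exec nvmf_ns_spdk` so its listener binds 10.0.0.2 inside the namespace) and bdevperf (pid 94679) follow the same readiness contract: start with --wait-for-rpc, then block in waitforlisten until the RPC UNIX socket answers. A simplified equivalent of that wait (the real helper probes the socket with an rpc.py call; a bare socket-file check stands in here, and the retry count is illustrative):

  # Wait until an SPDK app's RPC socket appears, or the app dies first.
  wait_for_rpc_sock() {
      local sock=$1 pid=$2 retries=${3:-100}
      while (( retries-- > 0 )); do
          kill -0 "$pid" 2>/dev/null || return 1   # process exited early
          [[ -S $sock ]] && return 0               # socket is in place
          sleep 0.1
      done
      return 1
  }
  wait_for_rpc_sock /var/tmp/bperf.sock "$bperfpid"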
00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:12.221 11:19:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:12.221 [2024-12-05 11:19:36.794912] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:12.221 [2024-12-05 11:19:36.795021] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94679 ] 00:36:12.489 [2024-12-05 11:19:36.947469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.489 [2024-12-05 11:19:37.005952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.489 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.489 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:12.489 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:12.489 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:12.489 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:13.055 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:13.055 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:13.318 nvme0n1 00:36:13.318 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:13.318 11:19:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:13.318 Running I/O for 2 seconds... 
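Because bdevperf was started with --wait-for-rpc, nothing runs until run_bperf pushes configuration over /var/tmp/bperf.sock. Collapsed from the entries above, the whole randread/4096/qd128 case is three calls; --ddgst is the point of the test, forcing a CRC32C data digest on every NVMe/TCP data PDU:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $rpc framework_start_init              # finish subsystem init; accel module is chosen here
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # exposes bdev nvme0n1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests             # drive I/O for the 2s runtime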
00:36:15.634 23240.00 IOPS, 90.78 MiB/s [2024-12-05T11:19:40.286Z] 23283.00 IOPS, 90.95 MiB/s 00:36:15.634 Latency(us) 00:36:15.634 [2024-12-05T11:19:40.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.634 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:15.634 nvme0n1 : 2.00 23307.41 91.04 0.00 0.00 5486.22 2902.31 17601.10 00:36:15.634 [2024-12-05T11:19:40.286Z] =================================================================================================================== 00:36:15.634 [2024-12-05T11:19:40.286Z] Total : 23307.41 91.04 0.00 0.00 5486.22 2902.31 17601.10 00:36:15.634 { 00:36:15.634 "results": [ 00:36:15.634 { 00:36:15.634 "job": "nvme0n1", 00:36:15.634 "core_mask": "0x2", 00:36:15.634 "workload": "randread", 00:36:15.634 "status": "finished", 00:36:15.634 "queue_depth": 128, 00:36:15.634 "io_size": 4096, 00:36:15.634 "runtime": 2.003397, 00:36:15.634 "iops": 23307.41236010636, 00:36:15.634 "mibps": 91.04457953166546, 00:36:15.634 "io_failed": 0, 00:36:15.634 "io_timeout": 0, 00:36:15.634 "avg_latency_us": 5486.222435430676, 00:36:15.634 "min_latency_us": 2902.308571428571, 00:36:15.634 "max_latency_us": 17601.097142857143 00:36:15.634 } 00:36:15.634 ], 00:36:15.634 "core_count": 1 00:36:15.634 } 00:36:15.634 11:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:15.634 11:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:15.634 11:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:15.634 11:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:15.634 11:19:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:15.634 | select(.opcode=="crc32c") 00:36:15.634 | "\(.module_name) \(.executed)"' 00:36:15.634 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:15.634 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:15.634 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:15.634 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:15.634 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94679 00:36:15.634 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94679 ']' 00:36:15.634 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94679 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94679 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:36:15.892 killing process with pid 94679 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94679' 00:36:15.892 Received shutdown signal, test time was about 2.000000 seconds 00:36:15.892 00:36:15.892 Latency(us) 00:36:15.892 [2024-12-05T11:19:40.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.892 [2024-12-05T11:19:40.544Z] =================================================================================================================== 00:36:15.892 [2024-12-05T11:19:40.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94679 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94679 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94750 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94750 /var/tmp/bperf.sock 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94750 ']' 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.892 11:19:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:16.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:16.151 Zero copy mechanism will not be used. 00:36:16.151 [2024-12-05 11:19:40.558467] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:36:16.151 [2024-12-05 11:19:40.558568] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94750 ] 00:36:16.151 [2024-12-05 11:19:40.708288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.151 [2024-12-05 11:19:40.761111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.085 11:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.085 11:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:17.085 11:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:17.085 11:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:17.085 11:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:17.343 11:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.343 11:19:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.602 nvme0n1 00:36:17.602 11:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:17.602 11:19:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.602 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:17.602 Zero copy mechanism will not be used. 00:36:17.602 Running I/O for 2 seconds... 
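The "read -r acc_module acc_executed" step in the run above pipes accel statistics through a jq program to decide whether the digest path was really exercised. The same check as a standalone sketch, with SPDK and SOCK as in the earlier sketch:

# Sketch: verify CRC-32C digests were computed, and by which accel module.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock
read -r acc_module acc_executed < <(
  "$SPDK"/scripts/rpc.py -s "$SOCK" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# With scan_dsa=false the expected module is "software"; the executed counter
# must have advanced, otherwise no digests were actually computed.
(( acc_executed > 0 )) && [[ $acc_module == software ]] &&
  echo "crc32c ok: $acc_module ($acc_executed operations)"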
00:36:19.913 9170.00 IOPS, 1146.25 MiB/s [2024-12-05T11:19:44.565Z] 9151.00 IOPS, 1143.88 MiB/s 00:36:19.913 Latency(us) 00:36:19.913 [2024-12-05T11:19:44.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.913 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:19.913 nvme0n1 : 2.00 9146.44 1143.31 0.00 0.00 1746.47 522.73 6054.28 00:36:19.913 [2024-12-05T11:19:44.565Z] =================================================================================================================== 00:36:19.913 [2024-12-05T11:19:44.565Z] Total : 9146.44 1143.31 0.00 0.00 1746.47 522.73 6054.28 00:36:19.913 { 00:36:19.913 "results": [ 00:36:19.913 { 00:36:19.913 "job": "nvme0n1", 00:36:19.913 "core_mask": "0x2", 00:36:19.913 "workload": "randread", 00:36:19.913 "status": "finished", 00:36:19.913 "queue_depth": 16, 00:36:19.913 "io_size": 131072, 00:36:19.913 "runtime": 2.002855, 00:36:19.913 "iops": 9146.443451972309, 00:36:19.913 "mibps": 1143.3054314965386, 00:36:19.913 "io_failed": 0, 00:36:19.913 "io_timeout": 0, 00:36:19.913 "avg_latency_us": 1746.4678161367717, 00:36:19.913 "min_latency_us": 522.7276190476191, 00:36:19.913 "max_latency_us": 6054.278095238095 00:36:19.913 } 00:36:19.913 ], 00:36:19.913 "core_count": 1 00:36:19.913 } 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:19.913 | select(.opcode=="crc32c") 00:36:19.913 | "\(.module_name) \(.executed)"' 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94750 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94750 ']' 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94750 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.913 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94750 00:36:20.172 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:20.172 killing process with pid 94750 00:36:20.172 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:20.172 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94750' 00:36:20.172 Received shutdown signal, test time was about 2.000000 seconds 00:36:20.172 00:36:20.172 Latency(us) 00:36:20.172 [2024-12-05T11:19:44.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.172 [2024-12-05T11:19:44.824Z] =================================================================================================================== 00:36:20.172 [2024-12-05T11:19:44.824Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.172 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94750 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94750 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94841 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94841 /var/tmp/bperf.sock 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94841 ']' 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.173 11:19:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:20.173 [2024-12-05 11:19:44.775816] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:36:20.173 [2024-12-05 11:19:44.775905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94841 ] 00:36:20.432 [2024-12-05 11:19:44.916745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.432 [2024-12-05 11:19:44.967639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.432 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.432 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:20.432 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:20.432 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:20.432 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:20.691 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:20.691 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:21.257 nvme0n1 00:36:21.257 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:21.257 11:19:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:21.257 Running I/O for 2 seconds... 
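The IOPS and MiB/s columns in the result tables are tied together by the I/O size: MiB/s = IOPS * io_size / 2^20. A quick arithmetic check against the two completed randread runs above:

awk 'BEGIN {
  printf "%.2f MiB/s\n", 23307.41 * 4096   / 1048576  # ~91.04, the 4 KiB qd128 run
  printf "%.2f MiB/s\n", 9146.44  * 131072 / 1048576  # ~1143.31, the 128 KiB qd16 run
}'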
00:36:23.163 27756.00 IOPS, 108.42 MiB/s [2024-12-05T11:19:47.815Z] 27949.00 IOPS, 109.18 MiB/s 00:36:23.163 Latency(us) 00:36:23.163 [2024-12-05T11:19:47.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.163 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:23.163 nvme0n1 : 2.01 27958.52 109.21 0.00 0.00 4571.90 2418.59 12420.63 00:36:23.163 [2024-12-05T11:19:47.815Z] =================================================================================================================== 00:36:23.163 [2024-12-05T11:19:47.815Z] Total : 27958.52 109.21 0.00 0.00 4571.90 2418.59 12420.63 00:36:23.163 { 00:36:23.163 "results": [ 00:36:23.163 { 00:36:23.163 "job": "nvme0n1", 00:36:23.163 "core_mask": "0x2", 00:36:23.163 "workload": "randwrite", 00:36:23.163 "status": "finished", 00:36:23.163 "queue_depth": 128, 00:36:23.163 "io_size": 4096, 00:36:23.163 "runtime": 2.005578, 00:36:23.163 "iops": 27958.523677463556, 00:36:23.163 "mibps": 109.21298311509202, 00:36:23.163 "io_failed": 0, 00:36:23.163 "io_timeout": 0, 00:36:23.163 "avg_latency_us": 4571.898241628897, 00:36:23.163 "min_latency_us": 2418.5904761904762, 00:36:23.163 "max_latency_us": 12420.63238095238 00:36:23.163 } 00:36:23.163 ], 00:36:23.163 "core_count": 1 00:36:23.163 } 00:36:23.163 11:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:23.163 11:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:23.163 11:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:23.163 11:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:23.163 | select(.opcode=="crc32c") 00:36:23.163 | "\(.module_name) \(.executed)"' 00:36:23.163 11:19:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94841 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94841 ']' 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94841 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94841 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:36:23.423 killing process with pid 94841 00:36:23.423 Received shutdown signal, test time was about 2.000000 seconds 00:36:23.423 00:36:23.423 Latency(us) 00:36:23.423 [2024-12-05T11:19:48.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.423 [2024-12-05T11:19:48.075Z] =================================================================================================================== 00:36:23.423 [2024-12-05T11:19:48.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94841' 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94841 00:36:23.423 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94841 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94919 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94919 /var/tmp/bperf.sock 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94919 ']' 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.682 11:19:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.682 [2024-12-05 11:19:48.276336] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:23.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:23.682 Zero copy mechanism will not be used. 
00:36:23.683 [2024-12-05 11:19:48.276952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94919 ] 00:36:23.941 [2024-12-05 11:19:48.421001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.942 [2024-12-05 11:19:48.474427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.878 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.878 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:24.878 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:24.878 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:24.878 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:24.878 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:24.878 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:25.136 nvme0n1 00:36:25.136 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:25.136 11:19:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:25.395 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:25.395 Zero copy mechanism will not be used. 00:36:25.395 Running I/O for 2 seconds... 
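Every run above gates on "waitforlisten <pid> /var/tmp/bperf.sock" (max_retries=100) before issuing RPCs. What follows is not the autotest_common.sh implementation, only a minimal stand-in for what that wait has to do -- poll until the app answers RPC on its socket, bailing out if the process dies first:

# Hypothetical helper; the real waitforlisten lives in autotest_common.sh.
SPDK=/home/vagrant/spdk_repo/spdk
wait_for_rpc_sock() {
  local pid=$1 sock=$2 retries=100
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
    # rpc_get_methods succeeds once the RPC server accepts connections
    "$SPDK"/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1   # gave up after 100 attempts
}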
00:36:27.268 8474.00 IOPS, 1059.25 MiB/s [2024-12-05T11:19:51.920Z] 8521.00 IOPS, 1065.12 MiB/s 00:36:27.269 Latency(us) 00:36:27.269 [2024-12-05T11:19:51.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.269 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:27.269 nvme0n1 : 2.00 8518.16 1064.77 0.00 0.00 1874.77 1318.52 11421.99 00:36:27.269 [2024-12-05T11:19:51.921Z] =================================================================================================================== 00:36:27.269 [2024-12-05T11:19:51.921Z] Total : 8518.16 1064.77 0.00 0.00 1874.77 1318.52 11421.99 00:36:27.269 { 00:36:27.269 "results": [ 00:36:27.269 { 00:36:27.269 "job": "nvme0n1", 00:36:27.269 "core_mask": "0x2", 00:36:27.269 "workload": "randwrite", 00:36:27.269 "status": "finished", 00:36:27.269 "queue_depth": 16, 00:36:27.269 "io_size": 131072, 00:36:27.269 "runtime": 2.003131, 00:36:27.269 "iops": 8518.164812985271, 00:36:27.269 "mibps": 1064.7706016231589, 00:36:27.269 "io_failed": 0, 00:36:27.269 "io_timeout": 0, 00:36:27.269 "avg_latency_us": 1874.7668962360774, 00:36:27.269 "min_latency_us": 1318.5219047619048, 00:36:27.269 "max_latency_us": 11421.988571428572 00:36:27.269 } 00:36:27.269 ], 00:36:27.269 "core_count": 1 00:36:27.269 } 00:36:27.528 11:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:27.528 11:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:27.528 11:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:27.528 11:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:27.528 11:19:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:27.528 | select(.opcode=="crc32c") 00:36:27.528 | "\(.module_name) \(.executed)"' 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94919 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94919 ']' 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94919 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94919 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:36:27.787 killing process with pid 94919 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94919' 00:36:27.787 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94919 00:36:27.787 Received shutdown signal, test time was about 2.000000 seconds 00:36:27.787 00:36:27.787 Latency(us) 00:36:27.787 [2024-12-05T11:19:52.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.788 [2024-12-05T11:19:52.440Z] =================================================================================================================== 00:36:27.788 [2024-12-05T11:19:52.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:27.788 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94919 00:36:27.788 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94641 00:36:27.788 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94641 ']' 00:36:27.788 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94641 00:36:27.788 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:27.788 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.788 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94641 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:28.047 killing process with pid 94641 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94641' 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94641 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94641 00:36:28.047 00:36:28.047 real 0m16.386s 00:36:28.047 user 0m31.200s 00:36:28.047 sys 0m4.965s 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:28.047 ************************************ 00:36:28.047 END TEST nvmf_digest_clean 00:36:28.047 ************************************ 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.047 ************************************ 00:36:28.047 START TEST nvmf_digest_error 00:36:28.047 ************************************ 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:36:28.047 11:19:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=95032 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 95032 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95032 ']' 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.047 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.306 [2024-12-05 11:19:52.716708] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:28.306 [2024-12-05 11:19:52.716784] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.306 [2024-12-05 11:19:52.860321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.306 [2024-12-05 11:19:52.905106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.306 [2024-12-05 11:19:52.905155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.306 [2024-12-05 11:19:52.905165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.306 [2024-12-05 11:19:52.905174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.306 [2024-12-05 11:19:52.905181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
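The nvmf_digest_error variant starting here differs from the clean runs in one step: while the target is still paused in --wait-for-rpc, the crc32c operation is reassigned to the accel "error" module, which is later told to corrupt digests on demand. The target-side RPC sequence the suite drives (its effects appear interleaved in the trace below), sketched against the target's RPC socket:

# Sketch of the error-injection setup for nvmf_digest_error, using the RPCs
# visible in this log.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/scripts/rpc.py accel_assign_opc -o crc32c -m error    # route crc32c via the error module
"$SPDK"/scripts/rpc.py framework_start_init
# ... nvmf transport/subsystem/listener setup elided ...
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable         # injection off during setup
"$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # then corrupt 256 digests

Once injection flips to corrupt, the READs in the stream below complete with "data digest error on tqpair" followed by COMMAND TRANSIENT TRANSPORT ERROR (00/22): the initiator catching the digests the target corrupted, which is what this test asserts.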
00:36:28.306 [2024-12-05 11:19:52.905456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.565 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.565 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:28.565 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:28.565 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.565 11:19:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 [2024-12-05 11:19:53.021890] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 null0 00:36:28.565 [2024-12-05 11:19:53.115944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:28.565 [2024-12-05 11:19:53.140073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95061 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95061 /var/tmp/bperf.sock 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:28.565 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95061 ']' 00:36:28.565 11:19:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:28.566 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:28.566 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:28.566 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.566 11:19:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:28.566 [2024-12-05 11:19:53.206963] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:28.566 [2024-12-05 11:19:53.207057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95061 ] 00:36:28.824 [2024-12-05 11:19:53.358237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.824 [2024-12-05 11:19:53.410672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:29.762 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:30.021 nvme0n1 00:36:30.021 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:30.021 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.021 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:30.279 11:19:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.280 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:30.280 11:19:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:30.280 Running I/O for 2 seconds... 00:36:30.280 [2024-12-05 11:19:54.781701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.781743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.781756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.792330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.792365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.792378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.802864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.802895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.802907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.812458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.812489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.812501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.823421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.823453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.823464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.834926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.834958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.834985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 
[2024-12-05 11:19:54.846447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.846481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.846493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.857597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.857628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.857640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.869285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.869317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.869328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.880282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.880313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.880325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.890523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.890555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.890566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.900851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.900884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.900896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:30.280 [2024-12-05 11:19:54.911733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:30.280 [2024-12-05 11:19:54.911761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:30.280 [2024-12-05 11:19:54.911773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:30.280 [2024-12-05 11:19:54.923305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200)
00:36:30.280 [2024-12-05 11:19:54.923336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:30.280 [2024-12-05 11:19:54.923347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:30.540 [2024-12-05 11:19:54.935111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200)
00:36:30.540 [2024-12-05 11:19:54.935142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:30.540 [2024-12-05 11:19:54.935154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated data digest failures elided: every READ on qid:1 logs "data digest error on tqpair=(0x80a200)" and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the timestamp, cid, and lba vary ...]
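What this stretch of the log exercises is the NVMe/TCP data digest (DDGST): the transport appends a CRC32C over each data PDU's payload, the receiver recomputes it (the nvme_tcp_accel_seq_recv_compute_crc32_done completion above), and a mismatch fails the I/O as a data digest error. A minimal sketch of that check, assuming a plain software CRC32C; verify_ddgst and the standalone layout are illustrative, not SPDK's implementation:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Software CRC32C (Castagnoli), reflected, poly 0x1EDC6F41 (0x82F63B78
 * reversed); this is the digest NVMe/TCP uses for HDGST/DDGST. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return ~crc;
}

/* Hypothetical receive-side check: recompute the digest over the received
 * payload and compare with the DDGST carried at the end of the PDU. */
static int verify_ddgst(const uint8_t *data, size_t len, uint32_t ddgst_wire)
{
    if (crc32c(0, data, len) != ddgst_wire) {
        fprintf(stderr, "data digest error\n"); /* the condition logged above */
        return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));
    uint32_t ddgst = crc32c(0, payload, sizeof(payload));

    printf("intact PDU:    %d\n", verify_ddgst(payload, sizeof(payload), ddgst));
    payload[100] ^= 0xFF; /* simulate corruption in flight */
    printf("corrupted PDU: %d\n", verify_ddgst(payload, sizeof(payload), ddgst));
    return 0;
}

Built with cc and run, the second call reports the mismatch; that is the condition this test stage appears to be injecting on every READ, and the host surfaces it as a transport-level failure rather than a media error.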
[... same pattern continues ...]
00:36:31.370 23113.00 IOPS, 90.29 MiB/s [2024-12-05T11:19:56.022Z] [2024-12-05 11:19:55.769996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200)
00:36:31.370 [2024-12-05 11:19:55.770030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.370 [2024-12-05 11:19:55.770042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... same pattern continues ...]
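For reading these completions: the (00/22) pair is status code type / status code in hex, i.e. SCT 0x0 (generic command status) with SC 0x22, which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 marks the failure retryable, which is why the workload keeps running. A small illustrative decoder of the 16-bit CQE status halfword (struct and names are ours; field layout per the NVMe completion entry):

#include <stdint.h>
#include <stdio.h>

/* Upper halfword of CQE dword 3: bit 0 is the phase tag, bits 8:1 the
 * status code (SC), bits 11:9 the status code type (SCT), bit 14 more (m),
 * bit 15 do-not-retry (dnr) -- the "(SCT/SC) ... p m dnr" fields printed
 * by spdk_nvme_print_completion above. */
struct status_fields { uint8_t p, sc, sct, m, dnr; };

static struct status_fields decode_status(uint16_t sw)
{
    struct status_fields f;
    f.p   = sw & 0x1;
    f.sc  = (sw >> 1) & 0xff;
    f.sct = (sw >> 9) & 0x7;
    f.m   = (sw >> 14) & 0x1;
    f.dnr = (sw >> 15) & 0x1;
    return f;
}

int main(void)
{
    /* SCT 0x0, SC 0x22: transient transport error, retryable (dnr=0). */
    uint16_t sw = (uint16_t)((0x0 << 9) | (0x22 << 1));
    struct status_fields f = decode_status(sw);
    printf("(%02x/%02x) p:%d m:%d dnr:%d\n", f.sct, f.sc, f.p, f.m, f.dnr);
    return 0;
}

Running it prints (00/22) p:0 m:0 dnr:0, matching the completions logged here.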
[... same pattern continues ...]
00:36:31.891 [2024-12-05 11:19:56.502001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200)
00:36:31.891 [2024-12-05 11:19:56.502035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:31.891 [2024-12-05 11:19:56.502046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.891 [2024-12-05 11:19:56.513053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:31.891 [2024-12-05 11:19:56.513088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.891 [2024-12-05 11:19:56.513099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.891 [2024-12-05 11:19:56.523642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:31.891 [2024-12-05 11:19:56.523675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.891 [2024-12-05 11:19:56.523687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.891 [2024-12-05 11:19:56.534117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:31.891 [2024-12-05 11:19:56.534150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.891 [2024-12-05 11:19:56.534161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:31.891 [2024-12-05 11:19:56.543187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:31.891 [2024-12-05 11:19:56.543222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:31.891 [2024-12-05 11:19:56.543234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.150 [2024-12-05 11:19:56.553647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.150 [2024-12-05 11:19:56.553680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.150 [2024-12-05 11:19:56.553708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.150 [2024-12-05 11:19:56.564420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.150 [2024-12-05 11:19:56.564454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.150 [2024-12-05 11:19:56.564465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.150 [2024-12-05 11:19:56.575517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.150 [2024-12-05 11:19:56.575550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.150 [2024-12-05 11:19:56.575577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.150 [2024-12-05 11:19:56.586557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.150 [2024-12-05 11:19:56.586599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.150 [2024-12-05 11:19:56.586610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.150 [2024-12-05 11:19:56.596887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.150 [2024-12-05 11:19:56.596920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.150 [2024-12-05 11:19:56.596932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.150 [2024-12-05 11:19:56.608349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.150 [2024-12-05 11:19:56.608382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.150 [2024-12-05 11:19:56.608393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.150 [2024-12-05 11:19:56.618904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.618937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.618949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.629523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.629557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.629568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.640108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.640140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.640151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.651086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.651120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 
[2024-12-05 11:19:56.651132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.661929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.661973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.662000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.672050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.672083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.672094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.681911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.681943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.681969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.694865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.694900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.694911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.703991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.704024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.704043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.714650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.714684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.714695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.725634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.725667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23106 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.725678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.736686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.736719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.736731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.747230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.747264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.747275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 [2024-12-05 11:19:56.757996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.758029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.758040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 23277.00 IOPS, 90.93 MiB/s [2024-12-05T11:19:56.803Z] [2024-12-05 11:19:56.767485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x80a200) 00:36:32.151 [2024-12-05 11:19:56.767520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:32.151 [2024-12-05 11:19:56.767531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:32.151 00:36:32.151 Latency(us) 00:36:32.151 [2024-12-05T11:19:56.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.151 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:32.151 nvme0n1 : 2.00 23292.14 90.98 0.00 0.00 5490.70 3120.76 14293.09 00:36:32.151 [2024-12-05T11:19:56.803Z] =================================================================================================================== 00:36:32.151 [2024-12-05T11:19:56.803Z] Total : 23292.14 90.98 0.00 0.00 5490.70 3120.76 14293.09 00:36:32.151 { 00:36:32.151 "results": [ 00:36:32.151 { 00:36:32.151 "job": "nvme0n1", 00:36:32.151 "core_mask": "0x2", 00:36:32.151 "workload": "randread", 00:36:32.151 "status": "finished", 00:36:32.151 "queue_depth": 128, 00:36:32.151 "io_size": 4096, 00:36:32.151 "runtime": 2.004195, 00:36:32.151 "iops": 23292.1447264363, 00:36:32.151 "mibps": 90.9849403376418, 00:36:32.151 "io_failed": 0, 00:36:32.151 "io_timeout": 0, 00:36:32.151 "avg_latency_us": 5490.700697607521, 00:36:32.151 "min_latency_us": 3120.7619047619046, 00:36:32.151 "max_latency_us": 14293.089523809524 00:36:32.151 } 00:36:32.151 ], 00:36:32.151 "core_count": 1 00:36:32.151 } 
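The summary line and the trailing JSON agree with each other: mibps is just iops scaled by the I/O size, 23292.14 * 4096 / 2^20 ≈ 90.98 MiB/s over the 2.004 s measured runtime at queue depth 128. Recomputing it from the results object is a one-liner (assuming the JSON above was saved to results.json; jq is already used elsewhere in this run):

  jq -r '.results[0] | .iops * .io_size / 1048576' results.json
  # prints 90.98..., matching the reported "mibps"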
00:36:32.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:32.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:32.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:32.151 11:19:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:32.151 | .driver_specific 00:36:32.151 | .nvme_error 00:36:32.151 | .status_code 00:36:32.151 | .command_transient_transport_error' 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 )) 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95061 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95061 ']' 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95061 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95061 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95061' 00:36:32.719 killing process with pid 95061 00:36:32.719 Received shutdown signal, test time was about 2.000000 seconds 00:36:32.719 00:36:32.719 Latency(us) 00:36:32.719 [2024-12-05T11:19:57.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.719 [2024-12-05T11:19:57.371Z] =================================================================================================================== 00:36:32.719 [2024-12-05T11:19:57.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95061 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95061 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95149 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 
2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95149 /var/tmp/bperf.sock 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95149 ']' 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.719 11:19:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:32.719 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:32.719 Zero copy mechanism will not be used. 00:36:32.719 [2024-12-05 11:19:57.351129] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:32.719 [2024-12-05 11:19:57.351233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95149 ] 00:36:32.978 [2024-12-05 11:19:57.502037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.978 [2024-12-05 11:19:57.550262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:33.915 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
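With the second bdevperf instance up (128 KiB random reads at queue depth 16, per -o 131072 -q 16), all remaining control is plain JSON-RPC over /var/tmp/bperf.sock. A hedged reconstruction of the control sequence the harness issues, pieced together from the rpc.py invocations visible in this log (the -i 32 injection interval is taken verbatim from the run; its exact semantics are SPDK's, not asserted here):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # keep per-controller NVMe error counters, retry failed I/O indefinitely
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach with data digest enabled, CRC32C error injection still disabled
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # flip injection to corrupt (interval 32), then drive the workload
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests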
00:36:34.174 nvme0n1 00:36:34.174 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:34.174 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.174 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.175 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.175 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:34.175 11:19:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:34.434 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:34.434 Zero copy mechanism will not be used. 00:36:34.434 Running I/O for 2 seconds... 00:36:34.434 [2024-12-05 11:19:58.906401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.434 [2024-12-05 11:19:58.906455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.434 [2024-12-05 11:19:58.906469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.434 [2024-12-05 11:19:58.910837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.434 [2024-12-05 11:19:58.910882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.910895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.915170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.915213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.915224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.918998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.919037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.919049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.921902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.921937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.921949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.925544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.925583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.925606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.929652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.929688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.929700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.933542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.933581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.933605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.937072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.937109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.937120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.940990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.941029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.941040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.943680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.943714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.943725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.947593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.947642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.947653] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.951180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.951217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.951228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.954119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.954155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.954183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.957026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.957063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.957074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.960513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.960551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.960563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.964382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.964420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.964431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.967178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.967213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.967224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.970704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.970738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.970749] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.974601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.974653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.974665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.978389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.978427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.978454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.982211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.982246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.982274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.984467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.984501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.984512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.988605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.988640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.988651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.992706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.992743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.992754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.995469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.995504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:34.435 [2024-12-05 11:19:58.995515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:58.998926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:58.998963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:58.998975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:59.002505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:59.002542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.435 [2024-12-05 11:19:59.002553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.435 [2024-12-05 11:19:59.006345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.435 [2024-12-05 11:19:59.006381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.006409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.010070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.010106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.013791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.013827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.013838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.017657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.017692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.017703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.021529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.021566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.021594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.024336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.024370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.024381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.027870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.027907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.027918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.031233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.031270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.031282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.034157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.034194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.034206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.037322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.037361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.037373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.040625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.040664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.040676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.043969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.044005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.044017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.046812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.046848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.046859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.049974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.050012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.050023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.053010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.053049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.053061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.056369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.056409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.056421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.059468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.059505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.059516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.062443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.062479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.436 [2024-12-05 11:19:59.062490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.436 [2024-12-05 11:19:59.066024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.436 [2024-12-05 11:19:59.066062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:34.436 [2024-12-05 11:19:59.066073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:34.436 [2024-12-05 11:19:59.069314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:34.436 [2024-12-05 11:19:59.069351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:34.436 [2024-12-05 11:19:59.069363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[... the same record triplet — data digest error on tqpair=(0x1b3ce00), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for roughly 140 further READ commands on qid:1 between 11:19:59.072 and 11:19:59.546; only the timestamps, cid, lba, and sqhd values vary ...]
00:36:34.963 [2024-12-05 11:19:59.546473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:34.963 [2024-12-05 11:19:59.546509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:34.963 [2024-12-05 11:19:59.546521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:34.963 [2024-12-05 11:19:59.549195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.549232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.963 [2024-12-05 11:19:59.549243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.963 [2024-12-05 11:19:59.552587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.552633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.963 [2024-12-05 11:19:59.552644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.963 [2024-12-05 11:19:59.555702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.555737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.963 [2024-12-05 11:19:59.555748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.963 [2024-12-05 11:19:59.558844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.558878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.963 [2024-12-05 11:19:59.558906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.963 [2024-12-05 11:19:59.562013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.562050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.963 [2024-12-05 11:19:59.562061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.963 [2024-12-05 11:19:59.565282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.565318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.963 [2024-12-05 11:19:59.565328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.963 [2024-12-05 11:19:59.568427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.568462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.963 [2024-12-05 11:19:59.568473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.963 [2024-12-05 11:19:59.571566] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.963 [2024-12-05 11:19:59.571627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.571639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.574735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.574770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.574781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.577910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.577945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.577957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.581098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.581133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.581144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.584234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.584270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.584282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.587233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.587267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.587294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.590078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.590113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.590123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 
m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.592926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.592962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.592973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.596239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.596276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.596287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.600078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.600115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.600126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.603777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.603812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.603840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.607626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.607659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.607687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:34.964 [2024-12-05 11:19:59.611281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:34.964 [2024-12-05 11:19:59.611317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.964 [2024-12-05 11:19:59.611328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.615025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.615062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.615073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.618726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.618761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.618772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.622289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.622326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.622337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.625963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.625999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.626011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.629534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.629571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.629582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.633190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.633227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.633239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.636975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.637011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.637023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.640800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.640837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.640848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.644659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.644697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.644710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.648804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.648845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.648858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.652961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.653000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.653013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.656875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.656914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.656926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.660836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.660874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.660887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.664663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.664698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.664709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.668276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.668311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:35.225 [2024-12-05 11:19:59.668322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.671394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.671430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.671442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.225 [2024-12-05 11:19:59.674400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.225 [2024-12-05 11:19:59.674436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.225 [2024-12-05 11:19:59.674447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.677619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.677653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.677665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.680229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.680267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.680278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.683796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.683831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.683842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.687880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.687916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.687928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.691605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.691638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.691666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.694265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.694298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.694309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.697588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.697638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.697666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.701558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.701623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.701635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.705354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.705390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.705417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.709091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.709128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.709140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.711486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.711522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.711534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.715533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.715570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.715581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.719451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.719487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.719499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.723411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.723449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.723460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.726200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.726234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.726245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.729573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.729619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.729631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.733421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.733458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.733486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.737017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.737054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.737065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.739235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 
00:36:35.226 [2024-12-05 11:19:59.739269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.739280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.743000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.743035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.743063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.746962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.746998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.747009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.750875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.750911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.750939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.753661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.753694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.753705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.757034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.757071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.757082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.760917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.760954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.760965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.764580] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.764624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.764636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.226 [2024-12-05 11:19:59.766854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.226 [2024-12-05 11:19:59.766888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.226 [2024-12-05 11:19:59.766899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.770475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.770513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.770525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.774164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.774201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.774213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.777876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.777914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.777925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.781495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.781533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.781544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.784894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.784930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.784942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:36:35.227 [2024-12-05 11:19:59.788692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.788728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.788739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.791380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.791416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.791427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.794549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.794599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.794610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.797644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.797677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.797689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.801103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.801142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.801155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.804844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.804885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.804898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.807824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.807860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.807872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.810882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.810919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.810931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.814203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.814239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.814251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.816843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.816881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.816892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.819412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.819446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.819458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.823025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.823063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.823074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.827149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.827188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.827199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.831250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.831297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.831308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.835133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.835169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.835180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.837346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.837382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.837393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.841293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.841331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.841342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.843871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.843906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.843918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.847280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.847317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.847328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.851013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.851051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.851062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.853666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.853700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.853711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.856608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.227 [2024-12-05 11:19:59.856655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.227 [2024-12-05 11:19:59.856666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.227 [2024-12-05 11:19:59.859550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.228 [2024-12-05 11:19:59.859615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.228 [2024-12-05 11:19:59.859627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.228 [2024-12-05 11:19:59.863008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.228 [2024-12-05 11:19:59.863043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.228 [2024-12-05 11:19:59.863069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:35.228 [2024-12-05 11:19:59.865505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.228 [2024-12-05 11:19:59.865542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.228 [2024-12-05 11:19:59.865553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:35.228 [2024-12-05 11:19:59.869059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.228 [2024-12-05 11:19:59.869096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.228 [2024-12-05 11:19:59.869107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:35.228 [2024-12-05 11:19:59.872555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.228 [2024-12-05 11:19:59.872605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:35.228 [2024-12-05 11:19:59.872619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:35.228 [2024-12-05 11:19:59.875157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:35.228 [2024-12-05 11:19:59.875202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:36:35.228 [2024-12-05 11:19:59.875229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:36:35.488 [2024-12-05 11:19:59.878509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:35.489 [2024-12-05 11:19:59.878547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.489 [2024-12-05 11:19:59.878559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:35.489 [2024-12-05 11:19:59.881465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:35.489 [2024-12-05 11:19:59.881502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.489 [2024-12-05 11:19:59.881514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:35.489 [2024-12-05 11:19:59.884245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:35.489 [2024-12-05 11:19:59.884280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.489 [2024-12-05 11:19:59.884291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:35.489 [2024-12-05 11:19:59.887828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:35.489 [2024-12-05 11:19:59.887865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.489 [2024-12-05 11:19:59.887876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:36:35.489 [2024-12-05 11:19:59.891368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:35.489 [2024-12-05 11:19:59.891404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.489 [2024-12-05 11:19:59.891431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:36:35.489 [2024-12-05 11:19:59.893969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:35.489 [2024-12-05 11:19:59.894005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.489 [2024-12-05 11:19:59.894016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:35.489 [2024-12-05 11:19:59.897220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:35.489 [2024-12-05 11:19:59.897258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.489 [2024-12-05 11:19:59.897269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:35.489 8927.00 IOPS, 1115.88 MiB/s [2024-12-05T11:20:00.141Z]
[... the same three-line pattern repeats for every remaining READ between 11:19:59.901 and 11:20:00.406: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done flags a data digest error on tqpair=(0x1b3ce00), nvme_qpair.c: 243:nvme_io_qpair_print_command prints the failed READ (sqid:1, nsid:1, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and nvme_qpair.c: 474:spdk_nvme_print_completion reports COMMAND TRANSIENT TRANSPORT ERROR (00/22) with p:0 m:0 dnr:0; only the cid, lba, and sqhd values vary between entries ...]
00:36:36.016 [2024-12-05 11:20:00.409271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:36.016 [2024-12-05 11:20:00.409308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.016 [2024-12-05 11:20:00.409319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:36:36.016 [2024-12-05 11:20:00.412134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:36.016 [2024-12-05 11:20:00.412169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.412179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.415098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.415135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.415147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.418164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.418199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.418226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.421024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.421062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.421073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.424554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.424599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.424612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.427172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.427208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.427219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.430308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.430344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.430355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.433634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.433669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.433697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.436188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.436225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.436236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.439591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.439639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.439650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.442379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.442413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.442425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.445658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.445691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.445702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.448452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.448489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.448500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.451687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 [2024-12-05 11:20:00.451723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.451733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.016 [2024-12-05 11:20:00.454879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.016 
[2024-12-05 11:20:00.454915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.016 [2024-12-05 11:20:00.454942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.457530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.457566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.457578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.460774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.460811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.460822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.463801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.463835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.463846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.466942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.466977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.467004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.469870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.469906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.469917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.472961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.472997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.473008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.475686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.475720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.475731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.478897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.478933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.478943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.481680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.481714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.481735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.484855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.484892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.484903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.487871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.487905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.487916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.490271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.490305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.490332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.494030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.494066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.494077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.498008] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.498045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.498072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.501401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.501437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.501448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.503744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.503788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.503815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.507704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.507738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.507766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.511671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.511704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.511732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.515474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.515511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.515538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.518269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.518303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.518330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 
dnr:0 00:36:36.017 [2024-12-05 11:20:00.521661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.521696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.521708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.525442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.525481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.525493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.527906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.527939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.527950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.531197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.531233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.531244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.534858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.534893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.534903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.538552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.538614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.017 [2024-12-05 11:20:00.538626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.017 [2024-12-05 11:20:00.542162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.017 [2024-12-05 11:20:00.542197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.018 [2024-12-05 11:20:00.542208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
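[Editor's note: every triplet above records the same mechanism. On the receive path, nvme_tcp_accel_seq_recv_compute_crc32_done() recomputes the CRC32C data digest (DDGST) of an incoming data PDU; a mismatch is logged as a "data digest error" and the affected READ is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the initiator is free to retry. A minimal, self-contained C sketch of such a digest check follows. It is illustrative only: crc32c() and verify_ddgst() are hypothetical stand-ins for SPDK's spdk_crc32c_update() and its receive-path comparison, and the payload is made up.]

/* ddgst_check.c - toy NVMe/TCP-style data digest (CRC32C) verification. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* CRC32C (Castagnoli), bitwise reflected form, polynomial 0x82F63B38. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B38u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Compare the digest carried in a PDU trailer against a fresh
 * computation over the received payload; nonzero corresponds to the
 * "data digest error" lines in this log. */
static int verify_ddgst(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[512] = { 0xA5 };   /* made-up payload */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    printf("intact:    %d\n", verify_ddgst(payload, sizeof(payload), ddgst));
    payload[100] ^= 0xFF;              /* simulate corruption in flight */
    printf("corrupted: %d\n", verify_ddgst(payload, sizeof(payload), ddgst));
    return 0;
}

[A corrupted payload makes verify_ddgst() return -1; a transport would then surface a retryable (00/22) completion rather than silently delivering bad data, which is the pattern repeated throughout this test's output.]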
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:36.018 [2024-12-05 11:20:00.545648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:36.018 [2024-12-05 11:20:00.545682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.018 [2024-12-05 11:20:00.545694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[... dozens more of the same digest-error triplets elided (host timestamps 11:20:00.549 through 11:20:00.805), still all len:32 READs on sqid:1 against tqpair=(0x1b3ce00) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
00:36:36.281 [2024-12-05 11:20:00.809162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00)
00:36:36.281 [2024-12-05 11:20:00.809199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.281 [2024-12-05 11:20:00.809210] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.812702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.812737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.812749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.816497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.816534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.816545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.820416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.820453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.820465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.824215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.824252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.824263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.827941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.827978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.828005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.831485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.831522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.831533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.835383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.835421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.835432] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.839268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.839307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.839318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.843033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.843071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.281 [2024-12-05 11:20:00.843082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.281 [2024-12-05 11:20:00.846909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.281 [2024-12-05 11:20:00.846946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.846957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.850675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.850708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.850719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.854132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.854169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.854180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.857854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.857894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.857905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.861606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.861652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:36.282 [2024-12-05 11:20:00.861663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.865348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.865385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.865397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.869017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.869055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.869066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.872634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.872669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.872680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.876075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.876110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.876122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.879624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.879658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.879668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.883214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.883249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.883260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.886971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.887007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.887017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.890237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.890270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.890297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.894058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.894094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.894121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:36.282 [2024-12-05 11:20:00.897785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b3ce00) 00:36:36.282 [2024-12-05 11:20:00.897820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.282 [2024-12-05 11:20:00.897831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:36.282 8784.50 IOPS, 1098.06 MiB/s 00:36:36.282 Latency(us) 00:36:36.282 [2024-12-05T11:20:00.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.282 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:36.282 nvme0n1 : 2.00 8783.29 1097.91 0.00 0.00 1818.73 507.12 9799.19 00:36:36.282 [2024-12-05T11:20:00.934Z] =================================================================================================================== 00:36:36.282 [2024-12-05T11:20:00.934Z] Total : 8783.29 1097.91 0.00 0.00 1818.73 507.12 9799.19 00:36:36.282 { 00:36:36.282 "results": [ 00:36:36.282 { 00:36:36.282 "job": "nvme0n1", 00:36:36.282 "core_mask": "0x2", 00:36:36.282 "workload": "randread", 00:36:36.282 "status": "finished", 00:36:36.282 "queue_depth": 16, 00:36:36.282 "io_size": 131072, 00:36:36.282 "runtime": 2.002097, 00:36:36.282 "iops": 8783.290719680415, 00:36:36.282 "mibps": 1097.911339960052, 00:36:36.282 "io_failed": 0, 00:36:36.282 "io_timeout": 0, 00:36:36.282 "avg_latency_us": 1818.7278183516794, 00:36:36.282 "min_latency_us": 507.12380952380954, 00:36:36.282 "max_latency_us": 9799.192380952381 00:36:36.282 } 00:36:36.282 ], 00:36:36.282 "core_count": 1 00:36:36.282 } 00:36:36.282 11:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:36.282 11:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:36.282 11:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:36.282 | .driver_specific 00:36:36.282 | .nvme_error 00:36:36.282 | .status_code 00:36:36.282 | .command_transient_transport_error' 00:36:36.282 
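The xtrace above reads the transient-error counter back through bdevperf's RPC socket. A minimal bash sketch of that readout, reconstructed from the trace (the rpc.py path, socket path, and jq filter are copied verbatim from the log; the actual helper body in host/digest.sh may differ):

get_transient_errcount() {
    local bdev=$1
    # Dump per-bdev I/O stats, then pull out the count of completions that
    # ended in TRANSIENT TRANSPORT ERROR (the (00/22) completions above).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
# The test then asserts the injected digest errors were actually counted,
# e.g. (( $(get_transient_errcount nvme0n1) > 0 )) -- in this run, 567 > 0.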
11:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 567 > 0 )) 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95149 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95149 ']' 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95149 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95149 00:36:36.849 killing process with pid 95149 00:36:36.849 Received shutdown signal, test time was about 2.000000 seconds 00:36:36.849 00:36:36.849 Latency(us) 00:36:36.849 [2024-12-05T11:20:01.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.849 [2024-12-05T11:20:01.501Z] =================================================================================================================== 00:36:36.849 [2024-12-05T11:20:01.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95149' 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95149 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95149 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95239 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95239 /var/tmp/bperf.sock 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95239 ']' 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:36.849 11:20:01 
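Between the two passes the old bdevperf (pid 95149) is killed and a fresh one is launched for the randwrite workload. A sketch of that launch-and-wait sequence, assuming the paths shown in the trace; the until-loop is a hypothetical stand-in for the autotest waitforlisten helper, using rpc_get_methods only as a cheap liveness probe:

bperf_sock=/var/tmp/bperf.sock
# Start bdevperf on core mask 0x2 in wait-for-RPC mode (-z): it brings up
# its reactor, opens the RPC socket, and idles until perform_tests arrives.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r "$bperf_sock" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# Poll the socket until it answers JSON-RPC, bailing out if bdevperf died.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperf_sock" \
        rpc_get_methods &>/dev/null; do
    kill -0 "$bperfpid" 2>/dev/null || { echo 'bdevperf exited early' >&2; exit 1; }
    sleep 0.1
done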
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:36.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:36.849 11:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:37.106 [2024-12-05 11:20:01.541582] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:37.106 [2024-12-05 11:20:01.541915] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95239 ] 00:36:37.106 [2024-12-05 11:20:01.691443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.106 [2024-12-05 11:20:01.741694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:38.041 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:38.610 nvme0n1 00:36:38.610 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:38.610 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.610 11:20:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.610 11:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.610 11:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
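The setup traced above, condensed: enable per-bdev NVMe error statistics with unlimited retries, attach the target over TCP with data digest (--ddgst) so data PDU payloads are CRC32C-checked, then have the accel layer corrupt crc32c results so those checks fail. All command flags below are copied from the xtrace; /var/tmp/spdk.sock is an assumption for the default RPC socket that rpc_cmd reaches, and the exact semantics of -i 256 (count vs. interval) belong to SPDK's accel_error module:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Keep NVMe error counters and retry transient errors indefinitely.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach with data digest enabled; each corrupted digest surfaces as a
# "Data digest error" line and a TRANSIENT TRANSPORT ERROR (00/22)
# completion in the trace that follows.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Inject crc32c corruption with the traced -i 256 parameter.
$rpc -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256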
host/digest.sh@69 -- # bperf_py perform_tests 00:36:38.610 11:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:38.610 Running I/O for 2 seconds... 00:36:38.610 [2024-12-05 11:20:03.114344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eefae0 00:36:38.610 [2024-12-05 11:20:03.115395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.115440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.125377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016edece0 00:36:38.610 [2024-12-05 11:20:03.126942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.126976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.131910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee73e0 00:36:38.610 [2024-12-05 11:20:03.132719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.132753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.142866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eff3c8 00:36:38.610 [2024-12-05 11:20:03.144224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.144270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.149546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7970 00:36:38.610 [2024-12-05 11:20:03.150119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.150151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.160500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eed920 00:36:38.610 [2024-12-05 11:20:03.161605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.161634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.169046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0ea0 00:36:38.610 [2024-12-05 11:20:03.169913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.169948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.177959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee8d30 00:36:38.610 [2024-12-05 11:20:03.178904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.178955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.190322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efc560 00:36:38.610 [2024-12-05 11:20:03.191950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.191991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.197408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef9b30 00:36:38.610 [2024-12-05 11:20:03.198096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.198131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.208628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeb760 00:36:38.610 [2024-12-05 11:20:03.209892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.209926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.217440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee3060 00:36:38.610 [2024-12-05 11:20:03.218407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.218445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.226787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeaef0 00:36:38.610 [2024-12-05 11:20:03.227729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.227763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.238105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efa3a0 00:36:38.610 [2024-12-05 
11:20:03.239545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.239577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.244747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efbcf0 00:36:38.610 [2024-12-05 11:20:03.245497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.245530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:38.610 [2024-12-05 11:20:03.255756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee95a0 00:36:38.610 [2024-12-05 11:20:03.257185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.610 [2024-12-05 11:20:03.257221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.265336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee5220 00:36:38.869 [2024-12-05 11:20:03.266496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.266535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.274826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eed0b0 00:36:38.869 [2024-12-05 11:20:03.275816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.275851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.286483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef81e0 00:36:38.869 [2024-12-05 11:20:03.288252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.288288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.293782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efdeb0 00:36:38.869 [2024-12-05 11:20:03.294622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.294656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.303777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee99d8 
00:36:38.869 [2024-12-05 11:20:03.304685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.304724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.313858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eed0b0 00:36:38.869 [2024-12-05 11:20:03.314727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.314763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.325337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4de8 00:36:38.869 [2024-12-05 11:20:03.326610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.326642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.334707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef5378 00:36:38.869 [2024-12-05 11:20:03.335973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.869 [2024-12-05 11:20:03.336006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:36:38.869 [2024-12-05 11:20:03.343612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef8e88 00:36:38.869 [2024-12-05 11:20:03.344900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.344932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.352964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef3a28 00:36:38.870 [2024-12-05 11:20:03.354016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.354047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.361825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee01f8 00:36:38.870 [2024-12-05 11:20:03.362763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.362791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.370778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ec4fa0) with pdu=0x200016ee01f8 00:36:38.870 [2024-12-05 11:20:03.371687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.371718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.381793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0ea0 00:36:38.870 [2024-12-05 11:20:03.383213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.383243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.388294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee1710 00:36:38.870 [2024-12-05 11:20:03.388983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.389012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.397580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef46d0 00:36:38.870 [2024-12-05 11:20:03.398378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.398410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.409470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee3060 00:36:38.870 [2024-12-05 11:20:03.410836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.410868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.416577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef1430 00:36:38.870 [2024-12-05 11:20:03.417216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.417247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.428117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee23b8 00:36:38.870 [2024-12-05 11:20:03.429370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.429400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.437683] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeb328 00:36:38.870 [2024-12-05 11:20:03.438696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.438730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.448197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee95a0 00:36:38.870 [2024-12-05 11:20:03.449228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.449262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.460059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efc128 00:36:38.870 [2024-12-05 11:20:03.461574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.461613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.467004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efef90 00:36:38.870 [2024-12-05 11:20:03.467704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.467736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.478309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee7c50 00:36:38.870 [2024-12-05 11:20:03.479465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.479497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.487133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef9f68 00:36:38.870 [2024-12-05 11:20:03.488042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.488090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.496159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee7c50 00:36:38.870 [2024-12-05 11:20:03.497160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.497191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.507412] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efef90 00:36:38.870 [2024-12-05 11:20:03.508872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.508900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:38.870 [2024-12-05 11:20:03.513983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efc128 00:36:38.870 [2024-12-05 11:20:03.514701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:38.870 [2024-12-05 11:20:03.514729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:39.130 [2024-12-05 11:20:03.525227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee95a0 00:36:39.130 [2024-12-05 11:20:03.526443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:39.130 [2024-12-05 11:20:03.526471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:39.130 [2024-12-05 11:20:03.533660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeaef0 00:36:39.130 [2024-12-05 11:20:03.535113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:39.130 [2024-12-05 11:20:03.535144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:39.130 [2024-12-05 11:20:03.543705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efd208 00:36:39.130 [2024-12-05 11:20:03.544465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:39.130 [2024-12-05 11:20:03.544493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:39.130 [2024-12-05 11:20:03.552393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016edfdc0 00:36:39.130 [2024-12-05 11:20:03.553019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:39.130 [2024-12-05 11:20:03.553050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:39.130 [2024-12-05 11:20:03.561107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef5be8 00:36:39.130 [2024-12-05 11:20:03.561629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:39.130 [2024-12-05 11:20:03.561652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:39.130 
[2024-12-05 11:20:03.571661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eef270
00:36:39.130 [2024-12-05 11:20:03.572789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.572820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.580321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef8618
00:36:39.130 [2024-12-05 11:20:03.581359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.581387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.589076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eed4e8
00:36:39.130 [2024-12-05 11:20:03.589954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.589983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.597745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee01f8
00:36:39.130 [2024-12-05 11:20:03.598479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.598506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.606412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee73e0
00:36:39.130 [2024-12-05 11:20:03.607049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.607077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.617893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee12d8
00:36:39.130 [2024-12-05 11:20:03.619328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.619356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.626740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efbcf0
00:36:39.130 [2024-12-05 11:20:03.628002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.628036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.635766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7da8
00:36:39.130 [2024-12-05 11:20:03.637110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.637140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.644558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efd208
00:36:39.130 [2024-12-05 11:20:03.645607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.645645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.653558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeee38
00:36:39.130 [2024-12-05 11:20:03.654661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.654707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.662456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0630
00:36:39.130 [2024-12-05 11:20:03.663317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.663348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.671549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef20d8
00:36:39.130 [2024-12-05 11:20:03.672361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.672387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.682739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef9f68
00:36:39.130 [2024-12-05 11:20:03.684165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.684193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.689784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efd208
00:36:39.130 [2024-12-05 11:20:03.690472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.690501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.701426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee88f8
00:36:39.130 [2024-12-05 11:20:03.702633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.702659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.710034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efac10
00:36:39.130 [2024-12-05 11:20:03.710882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.710910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.719071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee8088
00:36:39.130 [2024-12-05 11:20:03.719992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.720019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:36:39.130 [2024-12-05 11:20:03.730216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee6300
00:36:39.130 [2024-12-05 11:20:03.731608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.130 [2024-12-05 11:20:03.731634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:39.131 [2024-12-05 11:20:03.736940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4578
00:36:39.131 [2024-12-05 11:20:03.737567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.131 [2024-12-05 11:20:03.737601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:36:39.131 [2024-12-05 11:20:03.747974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efb048
00:36:39.131 [2024-12-05 11:20:03.749236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.131 [2024-12-05 11:20:03.749266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:36:39.131 [2024-12-05 11:20:03.756657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eed0b0
00:36:39.131 [2024-12-05 11:20:03.757643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.131 [2024-12-05 11:20:03.757673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:36:39.131 [2024-12-05 11:20:03.766210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efb048
00:36:39.131 [2024-12-05 11:20:03.767137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.131 [2024-12-05 11:20:03.767164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:36:39.131 [2024-12-05 11:20:03.777643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4578
00:36:39.131 [2024-12-05 11:20:03.779083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.131 [2024-12-05 11:20:03.779110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.784506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee6300
00:36:39.391 [2024-12-05 11:20:03.785320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.785349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.795865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee8088
00:36:39.391 [2024-12-05 11:20:03.797156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.797184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.804411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ede038
00:36:39.391 [2024-12-05 11:20:03.805423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.805452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.813344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee88f8
00:36:39.391 [2024-12-05 11:20:03.814331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.814358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.824284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efd208
00:36:39.391 [2024-12-05 11:20:03.825797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.825822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.830766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef9f68
00:36:39.391 [2024-12-05 11:20:03.831516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.831543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.841668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0630
00:36:39.391 [2024-12-05 11:20:03.842942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.842970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.850150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef20d8
00:36:39.391 [2024-12-05 11:20:03.851199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.851227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.858997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeee38
00:36:39.391 [2024-12-05 11:20:03.860069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.860097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.867428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efd208
00:36:39.391 [2024-12-05 11:20:03.868267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.868295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.876284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7da8
00:36:39.391 [2024-12-05 11:20:03.877107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.877135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.887200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efac10
00:36:39.391 [2024-12-05 11:20:03.888547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.391 [2024-12-05 11:20:03.888574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:39.391 [2024-12-05 11:20:03.893774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef6890
00:36:39.391 [2024-12-05 11:20:03.894363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.894389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.904755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef81e0
00:36:39.392 [2024-12-05 11:20:03.905926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.905953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.913294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4578
00:36:39.392 [2024-12-05 11:20:03.914157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.914185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.922154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016edf988
00:36:39.392 [2024-12-05 11:20:03.923043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.923070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.933302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eed0b0
00:36:39.392 [2024-12-05 11:20:03.934713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.934739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.939842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ede8a8
00:36:39.392 [2024-12-05 11:20:03.940554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.940582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.951044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef2d80
00:36:39.392 [2024-12-05 11:20:03.952271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.952299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.959752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee6300
00:36:39.392 [2024-12-05 11:20:03.960770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.960800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.968763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef8a50
00:36:39.392 [2024-12-05 11:20:03.969747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.969774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.979894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ede038
00:36:39.392 [2024-12-05 11:20:03.981503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.981533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.986566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efa3a0
00:36:39.392 [2024-12-05 11:20:03.987307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.987333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:03.997453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeff18
00:36:39.392 [2024-12-05 11:20:03.998701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:03.998727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:04.005910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef9f68
00:36:39.392 [2024-12-05 11:20:04.006919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:04.006947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:04.014799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efc128
00:36:39.392 [2024-12-05 11:20:04.015808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:04.015835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:04.025816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef20d8
00:36:39.392 [2024-12-05 11:20:04.027342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:04.027371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:04.032312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef6458
00:36:39.392 [2024-12-05 11:20:04.033095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:04.033122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:36:39.392 [2024-12-05 11:20:04.043211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efd208
00:36:39.392 [2024-12-05 11:20:04.044518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.392 [2024-12-05 11:20:04.044545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:36:39.652 [2024-12-05 11:20:04.049919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef6cc8
00:36:39.652 [2024-12-05 11:20:04.050466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.652 [2024-12-05 11:20:04.050488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:36:39.652 [2024-12-05 11:20:04.060853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef57b0
00:36:39.652 [2024-12-05 11:20:04.061925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.652 [2024-12-05 11:20:04.061952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:39.652 [2024-12-05 11:20:04.069368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef6890
00:36:39.652 [2024-12-05 11:20:04.070188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.652 [2024-12-05 11:20:04.070216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:36:39.652 [2024-12-05 11:20:04.078307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efc998
00:36:39.652 [2024-12-05 11:20:04.079170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.652 [2024-12-05 11:20:04.079197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
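Note on the repeated errors above: data_crc32_calc_done is the NVMe/TCP data digest check firing. With digests enabled, each data PDU carries a DDGST field holding a CRC32C (Castagnoli) of the PDU payload; the receiver recomputes the value and, on mismatch, fails the command with the transient transport error printed after each failure. A minimal bitwise sketch of that digest follows (illustrative only: the function name nvme_tcp_calc_ddgst and the zeroed 4 KiB payload are assumptions for the sketch, not SPDK's implementation, which uses table- or instruction-accelerated CRC32C):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli), the checksum NVMe/TCP uses for DDGST.
     * nvme_tcp_calc_ddgst is an illustrative name, not SPDK's API. */
    static uint32_t nvme_tcp_calc_ddgst(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;                      /* initial seed */

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)                  /* reflected poly 0x82F63B38 */
                crc = (crc >> 1) ^ (0x82F63B38u & -(crc & 1u));
        }
        return ~crc;                                     /* final XOR */
    }

    int main(void)
    {
        static uint8_t payload[0x1000];                  /* one 4 KiB write, len:0x1000 as in the log */
        uint32_t ddgst = nvme_tcp_calc_ddgst(payload, sizeof(payload));

        /* The receiver reports "Data digest error" when the DDGST carried
         * in the PDU does not equal this recomputed value. */
        printf("DDGST = 0x%08x\n", (unsigned)ddgst);
        return 0;
    }

In this test run the mismatches are expected: the initiator corrupts the digest on purpose, and every affected WRITE completes with a retryable transport-level error rather than bad data reaching the namespace.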
00:36:39.652 [2024-12-05 11:20:04.089216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4578
00:36:39.652 [2024-12-05 11:20:04.090599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.652 [2024-12-05 11:20:04.090626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:36:39.652 [2024-12-05 11:20:04.095691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efdeb0
00:36:39.652 [2024-12-05 11:20:04.096323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.096350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.104962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eec408 27159.00 IOPS, 106.09 MiB/s [2024-12-05T11:20:04.305Z] [2024-12-05 11:20:04.105643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.105671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.116106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef4298
00:36:39.653 [2024-12-05 11:20:04.116933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.116960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.125111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4140
00:36:39.653 [2024-12-05 11:20:04.125848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.125876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.133946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eef6a8
00:36:39.653 [2024-12-05 11:20:04.134447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.134472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.144435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef2d80
00:36:39.653 [2024-12-05 11:20:04.145610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.145648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.153137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeff18
00:36:39.653 [2024-12-05 11:20:04.154172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.154199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.161556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efa7d8
00:36:39.653 [2024-12-05 11:20:04.162439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.162468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.170441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeaef0
00:36:39.653 [2024-12-05 11:20:04.171343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.171371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.181358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efe2e8
00:36:39.653 [2024-12-05 11:20:04.182785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.182812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.187996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef8e88
00:36:39.653 [2024-12-05 11:20:04.188680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.188706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.198934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef3e60
00:36:39.653 [2024-12-05 11:20:04.200133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.200160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.207411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efef90
00:36:39.653 [2024-12-05 11:20:04.208438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.208468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.216495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016edfdc0
00:36:39.653 [2024-12-05 11:20:04.217506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.217534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.227454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efb480
00:36:39.653 [2024-12-05 11:20:04.229056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.229085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.234191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee5ec8
00:36:39.653 [2024-12-05 11:20:04.234931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.234958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.245259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efcdd0
00:36:39.653 [2024-12-05 11:20:04.246550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.246577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.253878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef1430
00:36:39.653 [2024-12-05 11:20:04.254889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.254909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.262776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7970
00:36:39.653 [2024-12-05 11:20:04.263798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.263822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.271694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7100
00:36:39.653 [2024-12-05 11:20:04.272637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.653 [2024-12-05 11:20:04.272668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:39.653 [2024-12-05 11:20:04.281803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeea00
00:36:39.653 [2024-12-05 11:20:04.282611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.654 [2024-12-05 11:20:04.282638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:36:39.654 [2024-12-05 11:20:04.293246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0a68
00:36:39.654 [2024-12-05 11:20:04.294732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.654 [2024-12-05 11:20:04.294760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:36:39.654 [2024-12-05 11:20:04.300472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef1430
00:36:39.654 [2024-12-05 11:20:04.301134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.654 [2024-12-05 11:20:04.301178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.312107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eecc78
00:36:39.913 [2024-12-05 11:20:04.313302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.313332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.321579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4de8
00:36:39.913 [2024-12-05 11:20:04.322549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.322581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.331044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee8d30
00:36:39.913 [2024-12-05 11:20:04.331909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.331936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.342094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efa7d8
00:36:39.913 [2024-12-05 11:20:04.343475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.343503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.348700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efef90
00:36:39.913 [2024-12-05 11:20:04.349327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.349354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.359850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee7818
00:36:39.913 [2024-12-05 11:20:04.361118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.361148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.368758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef8e88
00:36:39.913 [2024-12-05 11:20:04.369764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.369795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.378162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee7818
00:36:39.913 [2024-12-05 11:20:04.379149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.379179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.389824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efef90
00:36:39.913 [2024-12-05 11:20:04.391266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.391293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.396641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efa7d8
00:36:39.913 [2024-12-05 11:20:04.397397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.397425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:39.913 [2024-12-05 11:20:04.407886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee8d30
00:36:39.913 [2024-12-05 11:20:04.409223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.913 [2024-12-05 11:20:04.409253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.416757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee5ec8
00:36:39.914 [2024-12-05 11:20:04.417848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.417876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.425757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eecc78
00:36:39.914 [2024-12-05 11:20:04.426748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.426775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.436747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef1430
00:36:39.914 [2024-12-05 11:20:04.438406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.438432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.443524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0a68
00:36:39.914 [2024-12-05 11:20:04.444408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.444437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.455246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7100
00:36:39.914 [2024-12-05 11:20:04.456667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.456697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.465033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeea00
00:36:39.914 [2024-12-05 11:20:04.466251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.466286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.475497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7970
00:36:39.914 [2024-12-05 11:20:04.476789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.476819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.485084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef1430
00:36:39.914 [2024-12-05 11:20:04.486025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.486057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.495135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efcdd0
00:36:39.914 [2024-12-05 11:20:04.496105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.496134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.506562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4de8
00:36:39.914 [2024-12-05 11:20:04.507917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.507947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.513074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee6738
00:36:39.914 [2024-12-05 11:20:04.513674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.513702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.524016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eebfd0
00:36:39.914 [2024-12-05 11:20:04.525064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.525095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.532765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef35f0
00:36:39.914 [2024-12-05 11:20:04.533715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.533744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.541680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeaef0
00:36:39.914 [2024-12-05 11:20:04.542415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.542441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.550531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee49b0
00:36:39.914 [2024-12-05 11:20:04.551144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.551166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:36:39.914 [2024-12-05 11:20:04.562747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef0ff8
00:36:39.914 [2024-12-05 11:20:04.564324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:39.914 [2024-12-05 11:20:04.564355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:36:40.174 [2024-12-05 11:20:04.571018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef8618
00:36:40.174 [2024-12-05 11:20:04.571719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.571749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.580850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef6458
00:36:40.175 [2024-12-05 11:20:04.581961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.581989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.590482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eea680
00:36:40.175 [2024-12-05 11:20:04.591496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.591523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.599369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efc128
00:36:40.175 [2024-12-05 11:20:04.600321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.600350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
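Note on the completion notices: spdk_nvme_print_completion is decoding the 16-bit status word of the NVMe completion queue entry (the phase tag plus the 15-bit status field). (00/22) is status code type 0x0 (generic command status) and status code 0x22, Transient Transport Error, which is why these failures are retryable, while p, m and dnr print the phase tag, more and do-not-retry bits. A small sketch of the same decode from a raw status word (field layout per the NVMe base specification; the variable names are illustrative, not SPDK's):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 16-bit CQE status word as printed in the log lines:
         * bit 0 = P (phase), bits 8:1 = SC, bits 11:9 = SCT,
         * bits 13:12 = CRD, bit 14 = M (more), bit 15 = DNR. */
        uint16_t status = 0x22 << 1;      /* SCT=0x0 generic, SC=0x22 transient transport error */

        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xFF;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        /* prints: (00/22) p:0 m:0 dnr:0, matching the log format */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }

With dnr:0 the controller is explicitly allowing the host to retry the failed WRITE, which is the behavior this digest-error test exercises.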
00:36:40.175 [2024-12-05 11:20:04.608222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eefae0
00:36:40.175 [2024-12-05 11:20:04.609051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.609079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.617258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef7100
00:36:40.175 [2024-12-05 11:20:04.618019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.618049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.629078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef9b30
00:36:40.175 [2024-12-05 11:20:04.630016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.630046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.637939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee2c28
00:36:40.175 [2024-12-05 11:20:04.638604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.638634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.647278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee5220
00:36:40.175 [2024-12-05 11:20:04.648324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.648357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.657161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee6738
00:36:40.175 [2024-12-05 11:20:04.658212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.658241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.668586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016edf988
00:36:40.175 [2024-12-05 11:20:04.670143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.670172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.675442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4578
00:36:40.175 [2024-12-05 11:20:04.676199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.676228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.686699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef6890
00:36:40.175 [2024-12-05 11:20:04.687978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.688007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.695547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef20d8
00:36:40.175 [2024-12-05 11:20:04.696645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.696676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.704985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efc560
00:36:40.175 [2024-12-05 11:20:04.706097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.706130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.714693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eee190
00:36:40.175 [2024-12-05 11:20:04.715745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.715792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.724486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efe2e8
00:36:40.175 [2024-12-05 11:20:04.725332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.733309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee49b0
00:36:40.175 [2024-12-05 11:20:04.733959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.733988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.742103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee1b48
00:36:40.175 [2024-12-05 11:20:04.742677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.742706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.752878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef4f40
00:36:40.175 [2024-12-05 11:20:04.754073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.754106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.761951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eec408
00:36:40.175 [2024-12-05 11:20:04.762975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.763009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.770807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee23b8
00:36:40.175 [2024-12-05 11:20:04.771711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.771747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.779745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eea680
00:36:40.175 [2024-12-05 11:20:04.780523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.780558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.788811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeee38
00:36:40.175 [2024-12-05 11:20:04.789517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.789554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.801079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4de8
00:36:40.175 [2024-12-05 11:20:04.802764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.802800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.808029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016edf988
00:36:40.175 [2024-12-05 11:20:04.808912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.808946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:36:40.175 [2024-12-05 11:20:04.819564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef6020
00:36:40.175 [2024-12-05 11:20:04.820970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.175 [2024-12-05 11:20:04.821007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.828693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef35f0
00:36:40.436 [2024-12-05 11:20:04.829892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.829925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.838050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee27f0
00:36:40.436 [2024-12-05 11:20:04.839192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.839228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.846982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efb480
00:36:40.436 [2024-12-05 11:20:04.847852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.847889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.856398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeaef0
00:36:40.436 [2024-12-05 11:20:04.857196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.857234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.865462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeb760
00:36:40.436 [2024-12-05 11:20:04.866066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.866096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.876521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee4578
00:36:40.436 [2024-12-05 11:20:04.877887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.877922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.885272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0630
00:36:40.436 [2024-12-05 11:20:04.886270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.886306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.894397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efda78
00:36:40.436 [2024-12-05 11:20:04.895429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.895464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.905605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee8d30
00:36:40.436 [2024-12-05 11:20:04.907134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.907167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.912149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ede470
00:36:40.436 [2024-12-05 11:20:04.912986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.913020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.922154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efac10
00:36:40.436 [2024-12-05 11:20:04.923108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.923142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:36:40.436 [2024-12-05 11:20:04.931955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef46d0
00:36:40.436 [2024-12-05 11:20:04.932912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.436 [2024-12-05 11:20:04.932946] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:04.941204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee88f8 00:36:40.436 [2024-12-05 11:20:04.942056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:04.942088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:04.950682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efdeb0 00:36:40.436 [2024-12-05 11:20:04.951333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:04.951364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:04.962110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee84c0 00:36:40.436 [2024-12-05 11:20:04.963264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:04.963298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:04.970960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eff3c8 00:36:40.436 [2024-12-05 11:20:04.971998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:04.972037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:04.979677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efa7d8 00:36:40.436 [2024-12-05 11:20:04.980585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:04.980622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:04.990673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eefae0 00:36:40.436 [2024-12-05 11:20:04.992239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:04.992267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:04.997200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0ea0 00:36:40.436 [2024-12-05 11:20:04.997869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:04.997896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:05.008772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee3498 00:36:40.436 [2024-12-05 11:20:05.010343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:05.010374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:40.436 [2024-12-05 11:20:05.018192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016efe720 00:36:40.436 [2024-12-05 11:20:05.019566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.436 [2024-12-05 11:20:05.019610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:40.437 [2024-12-05 11:20:05.028096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee2c28 00:36:40.437 [2024-12-05 11:20:05.029480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.437 [2024-12-05 11:20:05.029512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:36:40.437 [2024-12-05 11:20:05.037884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016eeee38 00:36:40.437 [2024-12-05 11:20:05.039025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.437 [2024-12-05 11:20:05.039056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:40.437 [2024-12-05 11:20:05.047080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee0a68 00:36:40.437 [2024-12-05 11:20:05.048044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.437 [2024-12-05 11:20:05.048089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:36:40.437 [2024-12-05 11:20:05.056731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ee9168 00:36:40.437 [2024-12-05 11:20:05.057682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.437 [2024-12-05 11:20:05.057713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:40.437 [2024-12-05 11:20:05.066282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef4b08 00:36:40.437 [2024-12-05 11:20:05.066985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:40.437 [2024-12-05 
11:20:05.067013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:36:40.437 [2024-12-05 11:20:05.075290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef46d0
00:36:40.437 [2024-12-05 11:20:05.075867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.437 [2024-12-05 11:20:05.075893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:36:40.437 [2024-12-05 11:20:05.086678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef8618
00:36:40.437 [2024-12-05 11:20:05.087929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.437 [2024-12-05 11:20:05.087964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:36:40.708 [2024-12-05 11:20:05.096295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef31b8
00:36:40.708 [2024-12-05 11:20:05.097362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.708 [2024-12-05 11:20:05.097392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:36:40.708 [2024-12-05 11:20:05.106693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec4fa0) with pdu=0x200016ef4298
00:36:40.708 26956.50 IOPS, 105.30 MiB/s [2024-12-05T11:20:05.360Z] [2024-12-05 11:20:05.107788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:40.708 [2024-12-05 11:20:05.107819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:36:40.708
00:36:40.708 Latency(us)
00:36:40.708 [2024-12-05T11:20:05.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:40.708 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:40.708 nvme0n1 : 2.01 26973.98 105.37 0.00 0.00 4739.77 1872.46 13044.78
00:36:40.708 [2024-12-05T11:20:05.360Z] ===================================================================================================================
00:36:40.708 [2024-12-05T11:20:05.360Z] Total : 26973.98 105.37 0.00 0.00 4739.77 1872.46 13044.78
00:36:40.708 {
00:36:40.708 "results": [
00:36:40.708 {
00:36:40.708 "job": "nvme0n1",
00:36:40.708 "core_mask": "0x2",
00:36:40.708 "workload": "randwrite",
00:36:40.708 "status": "finished",
00:36:40.708 "queue_depth": 128,
00:36:40.708 "io_size": 4096,
00:36:40.708 "runtime": 2.006897,
00:36:40.708 "iops": 26973.980229179673,
00:36:40.708 "mibps": 105.3671102702331,
00:36:40.708 "io_failed": 0,
00:36:40.708 "io_timeout": 0,
00:36:40.708 "avg_latency_us": 4739.766819761192,
00:36:40.708 "min_latency_us": 1872.4571428571428,
00:36:40.708 "max_latency_us": 13044.784761904762
00:36:40.708 }
00:36:40.708 ],
00:36:40.708 "core_count": 1
00:36:40.708 }
00:36:40.708 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:40.708 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:40.708 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:40.708 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:40.708 | .driver_specific
00:36:40.708 | .nvme_error
00:36:40.708 | .status_code
00:36:40.708 | .command_transient_transport_error'
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 ))
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95239
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95239 ']'
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95239
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95239
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:40.965 killing process with pid 95239
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95239'
00:36:40.965 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95239
00:36:40.965 Received shutdown signal, test time was about 2.000000 seconds
00:36:40.965
00:36:40.965 Latency(us)
00:36:40.965 [2024-12-05T11:20:05.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:40.965 [2024-12-05T11:20:05.618Z] ===================================================================================================================
00:36:40.966 [2024-12-05T11:20:05.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:40.966 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95239
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95324
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95324 /var/tmp/bperf.sock
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95324 ']'
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:41.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:41.223 11:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:41.223 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:41.223 Zero copy mechanism will not be used.
00:36:41.223 [2024-12-05 11:20:05.692018] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:36:41.223 [2024-12-05 11:20:05.692148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95324 ]
00:36:41.480 [2024-12-05 11:20:05.837464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:41.480 [2024-12-05 11:20:05.901875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:41.480 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:41.480 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:36:41.480 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.480 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:41.737 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:41.737 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.737 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:41.737 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.737 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:41.737 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
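The get_transient_errcount helper traced above reads the per-bdev NVMe error counters back out of bdev_get_iostat and keeps only the transient transport error count; the (( 212 > 0 )) line is that counter substituted into the test's assertion, which only requires a positive value, presumably because the exact number of corrupted-CRC completions varies from run to run. A minimal standalone sketch of the same extraction, plus a cross-check of the bdevperf summary arithmetic, follows; the iostat payload is an illustrative shape inferred from the jq path in the trace, not real bperf output:

#!/usr/bin/env bash
# Cross-check of the summary table printed above: MiB/s = IOPS * io_size / 2^20.
# 26973.980229179673 IOPS at 4096 B per I/O gives 105.37 MiB/s, matching "mibps".
awk 'BEGIN { printf "%.2f MiB/s\n", 26973.980229179673 * 4096 / 1048576 }'

# Illustrative bdev_get_iostat-style payload. The nesting is inferred from the jq
# filter in the trace; the value 212 mirrors the assertion above and is otherwise
# made up for the example.
iostat_json='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":212}}}}]}'

# Same filter the traced get_transient_errcount pipes the iostat output through.
errcount=$(jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' <<< "$iostat_json")

# The test passes as long as the counter is positive after CRC corruption is injected.
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"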
00:36:41.994 nvme0n1 00:36:41.994 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:41.994 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.994 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:41.994 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.994 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:41.994 11:20:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:41.994 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:41.994 Zero copy mechanism will not be used. 00:36:41.994 Running I/O for 2 seconds... 00:36:41.994 [2024-12-05 11:20:06.632643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:41.994 [2024-12-05 11:20:06.632756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.994 [2024-12-05 11:20:06.632786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:41.994 [2024-12-05 11:20:06.636771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:41.994 [2024-12-05 11:20:06.636913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.994 [2024-12-05 11:20:06.636937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:41.994 [2024-12-05 11:20:06.640529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:41.994 [2024-12-05 11:20:06.640723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.994 [2024-12-05 11:20:06.640748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:41.994 [2024-12-05 11:20:06.644346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:41.994 [2024-12-05 11:20:06.644498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.995 [2024-12-05 11:20:06.644518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:41.995 [2024-12-05 11:20:06.648115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:41.995 [2024-12-05 11:20:06.648268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.995 [2024-12-05 11:20:06.648305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.652118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.652260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.652290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.655942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.656095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.656115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.659734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.659886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.659907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.663537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.663703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.663725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.667279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.667419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.667440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.671095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.671240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.671260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.674901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.675075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.675096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.678621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.678784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.678805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.682337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.682461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.682481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.686090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.686213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.686233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.689984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.690191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.690218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.693775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.693931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.693951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.254 [2024-12-05 11:20:06.697557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.254 [2024-12-05 11:20:06.697733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.254 [2024-12-05 11:20:06.697754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.701316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.701462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.701489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.705090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.705207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.705228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.708732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.708884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.708904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.712489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.712620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.712641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.716239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.716396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.716417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.720087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.720242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.720263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.723904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.724063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.724085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.727623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.727746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.727767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.731384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.731510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.731530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.735160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.735300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.735320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.739001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.739149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.739170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.742735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.742896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.742916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.746492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.746673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.746694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.750260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.750416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.750437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.754078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.754213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 
11:20:06.754233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.757937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.758059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.758080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.761759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.761890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.761911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.765667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.765826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.765846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.769542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.769705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.769726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.773481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.773644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.773666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.777288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.777437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.777458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.781069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.781216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:42.255 [2024-12-05 11:20:06.781237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.784933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.785098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.785120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.788849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.788992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.789014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.792634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.792772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.792792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.796512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.796707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.255 [2024-12-05 11:20:06.796734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.255 [2024-12-05 11:20:06.800328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.255 [2024-12-05 11:20:06.800486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.800507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.804167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.804313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.804333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.808041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.808209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.808230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.811906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.812060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.812080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.815687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.815859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.815885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.819451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.819615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.819637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.823369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.823532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.823552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.827402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.827535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.827555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.831358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.831505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.256 [2024-12-05 11:20:06.831526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.256 [2024-12-05 11:20:06.835241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.256 [2024-12-05 11:20:06.835399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:42.256 [2024-12-05 11:20:06.835420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:42.256 [2024-12-05 11:20:06.839105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:42.256 [2024-12-05 11:20:06.839245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:42.256 [2024-12-05 11:20:06.839265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:42.256 [2024-12-05 11:20:06.842895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:42.256 [2024-12-05 11:20:06.843018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:42.256 [2024-12-05 11:20:06.843040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2241 data digest error, nvme_qpair.c WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every remaining injected WRITE on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8, from [2024-12-05 11:20:06.846775] through [2024-12-05 11:20:07.335700], varying only in timestamp, cid, lba, and sqhd ...]
00:36:42.782 [2024-12-05 11:20:07.339014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:42.782 [2024-12-05 11:20:07.339386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:42.782 [2024-12-05 11:20:07.339413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:42.782 [2024-12-05 11:20:07.342673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:42.782 [2024-12-05 11:20:07.342988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.343014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.346069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.346367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.346392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.349533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.349873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.349899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.352933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.353271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.353297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.356439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.356785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.356813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.359822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.360150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.360175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.363199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.363521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.363548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.366674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.367000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.367028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.370120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.370445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.370471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.373919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.374443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.374482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.377463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.377775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.377803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.380779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.381081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.381122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.384081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.384420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.384450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.387484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.387758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.387779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.391114] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.391581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.391636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.394821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.395054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.395077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.398128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.398323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.398345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.401416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.401555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.782 [2024-12-05 11:20:07.401578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.782 [2024-12-05 11:20:07.404801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.782 [2024-12-05 11:20:07.405022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.405046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.783 [2024-12-05 11:20:07.408179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.783 [2024-12-05 11:20:07.408402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.408426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.783 [2024-12-05 11:20:07.411486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.783 [2024-12-05 11:20:07.411713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.411735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.783 [2024-12-05 11:20:07.414800] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.783 [2024-12-05 11:20:07.415035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.415057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:42.783 [2024-12-05 11:20:07.418195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.783 [2024-12-05 11:20:07.418365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.418386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:42.783 [2024-12-05 11:20:07.421455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.783 [2024-12-05 11:20:07.421699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.421722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:42.783 [2024-12-05 11:20:07.424857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.783 [2024-12-05 11:20:07.425176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.425205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:42.783 [2024-12-05 11:20:07.428213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:42.783 [2024-12-05 11:20:07.428421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.783 [2024-12-05 11:20:07.428445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.431483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.431707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.431729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.434834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.435023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.435045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.043 
[2024-12-05 11:20:07.438051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.438254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.438275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.441258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.441498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.441520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.444413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.444668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.444690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.447528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.447748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.447769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.450829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.451040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.451061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.454090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.454292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.454312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.457356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.457516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.457536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.460501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.460743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.460766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.463686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.463900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.463920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.466987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.467135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.467156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.470207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.470374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.470394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.473349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.473513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.473533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.476518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.476677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.476698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.479709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.479908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.479928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.482883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.483059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.483079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.486167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.486305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.486326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.489500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.489701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.489746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.492857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.493022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.493046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.496135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.496307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.496332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.499519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.499691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.499713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.503008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.503090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.043 [2024-12-05 11:20:07.503115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.043 [2024-12-05 11:20:07.506401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.043 [2024-12-05 11:20:07.506530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.506552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.509688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.509825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.509846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.512946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.513107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.513129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.516296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.516452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.516474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.519736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.519881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.519903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.523201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.523285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.523306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.526786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.526860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.526883] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.530217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.530333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.530353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.533633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.533714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.533736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.536951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.537069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.537090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.540407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.540561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.540603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.543960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.544109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.544131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.547491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.547671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.547694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.551047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.551134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.551156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.554679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.554825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.558258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.558325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.558347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.561855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.562026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.562047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.565377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.565524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.565552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.568879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.568943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.568966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.572351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.572451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.572473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.575895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.575958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 
11:20:07.575980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.579216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.579275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.579296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.582556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.582646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.582668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.585850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.585972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.585992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.589319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.589436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.589463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.592734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.592862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.592889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.596145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.596280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.596306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.599523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.599695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:43.044 [2024-12-05 11:20:07.599722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.602949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.603040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.603062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.606374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.606459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.606480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.609783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.609915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.609936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.613117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.613219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.613239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.616567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.616696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.616718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.619876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.619945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.044 [2024-12-05 11:20:07.619966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.044 [2024-12-05 11:20:07.623285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.044 [2024-12-05 11:20:07.623369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL 
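Every failure above is the same check tripping: when a DATA PDU finishes arriving, the target recomputes CRC32C over the payload (the data_crc32_calc_done callback in tcp.c) and compares it against the data digest (DDGST) that trails the PDU on the wire; NVMe/TCP specifies CRC32C, the Castagnoli polynomial, for both header and data digests. The sketch below is a minimal, self-contained illustration of that comparison, assuming the standard reflected CRC32C seed and final-XOR convention. It is not SPDK's code (SPDK has a spdk_crc32c_update() helper and uses accelerated implementations where available), and msg and received_ddgst are made-up stand-ins for the PDU payload and its trailing digest.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Reflected CRC32C (Castagnoli polynomial 0x1EDC6F41, reflected form
     * 0x82F63B78), the checksum NVMe/TCP uses for its header and data
     * digests. Bitwise variant, written for clarity rather than speed. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;  /* final XOR */
    }

    int main(void)
    {
        /* Hypothetical stand-ins: msg plays the DATA PDU payload ("123456789"
         * is the standard CRC check string; its CRC32C is 0xE3069283) and
         * received_ddgst is a deliberately wrong on-wire digest. */
        const char msg[] = "123456789";
        uint32_t received_ddgst = 0xDEADBEEFu;
        uint32_t computed = crc32c(msg, sizeof(msg) - 1);

        if (computed != received_ddgst) {
            /* This mismatch is what data_crc32_calc_done reports above; the
             * owning command is then completed with the retryable
             * TRANSIENT TRANSPORT ERROR (00/22) status. */
            printf("Data digest error: computed=0x%08x received=0x%08x\n",
                   computed, received_ddgst);
            return 1;
        }
        printf("data digest ok: 0x%08x\n", computed);
        return 0;
    }

Compiled as-is it takes the mismatch branch, mirroring the log; set received_ddgst to the correct 0xE3069283 and the check passes instead.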
00:36:43.044 8561.00 IOPS, 1070.12 MiB/s [2024-12-05T11:20:07.696Z]
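The completion lines are easier to read once the status word is unpacked: "(00/22)" is status code type 0x0 (generic) and status code 0x22, which the NVMe base spec (1.4 onward) defines as Transient Transport Error, and dnr:0 means the do-not-retry bit is clear, so the initiator may resubmit; that is consistent with the run continuing and still emitting throughput checkpoints like the one above. As a sketch of where those fields live in completion DW3 (the value below is hand-built, not captured from this run):

    #include <stdint.h>
    #include <stdio.h>

    /* Unpack the fields spdk_nvme_print_completion shows from a completion
     * queue entry's DW3, per the NVMe base spec layout:
     *   bit 16     phase tag (p)
     *   bits 17-24 status code (SC)
     *   bits 25-27 status code type (SCT)
     *   bit 30     more (m)
     *   bit 31     do not retry (dnr)
     * (sqhd in the log comes from DW2 bits 15:0 and is omitted here.) */
    static void print_status(uint32_t dw3)
    {
        unsigned p   = (dw3 >> 16) & 0x1;
        unsigned sc  = (dw3 >> 17) & 0xFF;
        unsigned sct = (dw3 >> 25) & 0x7;
        unsigned m   = (dw3 >> 30) & 0x1;
        unsigned dnr = (dw3 >> 31) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
               (sct == 0x0 && sc == 0x22) ? "  <- TRANSIENT TRANSPORT ERROR" : "");
    }

    int main(void)
    {
        /* SCT 0x0, SC 0x22, DNR clear: the retryable status that every
         * completion in the log above carries. */
        print_status((0x0u << 25) | (0x22u << 17));
        return 0;
    }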
[... the digest-error sequence resumes at 11:20:07.631 and continues unchanged (qid:1, cid:1/cid:3, varying lba, all len:32, all dnr:0) through 11:20:07.737 ...]
00:36:43.304 [2024-12-05 11:20:07.740785] tcp.c:2241:data_crc32_calc_done:
*ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.304 [2024-12-05 11:20:07.740925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.304 [2024-12-05 11:20:07.740946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.304 [2024-12-05 11:20:07.743969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.304 [2024-12-05 11:20:07.744056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.304 [2024-12-05 11:20:07.744094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.304 [2024-12-05 11:20:07.747244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.304 [2024-12-05 11:20:07.747407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.304 [2024-12-05 11:20:07.747427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.304 [2024-12-05 11:20:07.750504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.304 [2024-12-05 11:20:07.750652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.304 [2024-12-05 11:20:07.750673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.304 [2024-12-05 11:20:07.753776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.304 [2024-12-05 11:20:07.753880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.304 [2024-12-05 11:20:07.753900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.304 [2024-12-05 11:20:07.757142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.304 [2024-12-05 11:20:07.757223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.304 [2024-12-05 11:20:07.757244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.304 [2024-12-05 11:20:07.760431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.304 [2024-12-05 11:20:07.760517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.304 [2024-12-05 11:20:07.760538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.763768] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.763825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.763847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.766951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.767019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.767040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.770173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.770314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.770336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.773279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.773399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.773420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.776444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.776541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.776562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.779584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.779728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.779749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.782757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.782859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.782881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.305 
[2024-12-05 11:20:07.786023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.786161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.786183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.789136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.789224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.789245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.792310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.792399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.792419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.795554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.795685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.795707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.798730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.798846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.798867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.802022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.802397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.802437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.806049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.806368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.806411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.809481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.809563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.809587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.812656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.812867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.812895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.815861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.815970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.815992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.819139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.819191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.819213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.822394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.822458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.822480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.825684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.825746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.825768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.828895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.829012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.829034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.832102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.832186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.832210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.835498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.835579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.835617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.838799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.838862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.838884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.842025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.842078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.305 [2024-12-05 11:20:07.842099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.305 [2024-12-05 11:20:07.845343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.305 [2024-12-05 11:20:07.845396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.845418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.848597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.848684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.848707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.851792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.851887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.851908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.855093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.855146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.855167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.858404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.858461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.858482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.861636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.861709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.861731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.864899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.864977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.864997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.868137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.868207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.868227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.871399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.871457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.871477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.874662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.874729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.874749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.877942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.878013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.878034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.881120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.881174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.881195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.884298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.884405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.884426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.887627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.887681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.887702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.890949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.891018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.891039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.894237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.894291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.894311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.897639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.897744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 
11:20:07.897764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.900883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.900955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.900976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.904154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.904282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.907434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.907537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.907557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.910800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.910890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.910910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.914015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.914067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.914088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.917272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.917326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.917347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.920463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.920849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:43.306 [2024-12-05 11:20:07.920881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.923833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.923977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.923999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.927004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.927156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.927177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.930055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.930204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.930226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.306 [2024-12-05 11:20:07.933303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.306 [2024-12-05 11:20:07.933446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.306 [2024-12-05 11:20:07.933468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.307 [2024-12-05 11:20:07.936499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.307 [2024-12-05 11:20:07.936644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.307 [2024-12-05 11:20:07.936666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.307 [2024-12-05 11:20:07.939689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.307 [2024-12-05 11:20:07.939836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.307 [2024-12-05 11:20:07.939856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.307 [2024-12-05 11:20:07.942805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.307 [2024-12-05 11:20:07.942953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.307 [2024-12-05 11:20:07.942974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.307 [2024-12-05 11:20:07.945908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.307 [2024-12-05 11:20:07.946068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.307 [2024-12-05 11:20:07.946089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.307 [2024-12-05 11:20:07.949031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.307 [2024-12-05 11:20:07.949164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.307 [2024-12-05 11:20:07.949185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.307 [2024-12-05 11:20:07.952215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.307 [2024-12-05 11:20:07.952282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.307 [2024-12-05 11:20:07.952303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.307 [2024-12-05 11:20:07.955561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.307 [2024-12-05 11:20:07.955672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.307 [2024-12-05 11:20:07.955693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.567 [2024-12-05 11:20:07.958874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.567 [2024-12-05 11:20:07.958955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.567 [2024-12-05 11:20:07.958975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.567 [2024-12-05 11:20:07.962045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.567 [2024-12-05 11:20:07.962170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.567 [2024-12-05 11:20:07.962190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.567 [2024-12-05 11:20:07.965206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.965280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.965301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.968407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.968496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.968516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.971642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.971786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.971806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.974820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.974893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.974914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.978040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.978116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.978137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.981260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.981342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.981363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.984529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.984652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.984675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.987780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.987851] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.987871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.991051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.991143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.991163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.994302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.994408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.994430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:07.997558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:07.997646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:07.997667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.000741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.000844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.000871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.004080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.004156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.004177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.007503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.007579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.007612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.010810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.010893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.010920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.014046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.014156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.014182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.017400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.017526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.017552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.020620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.020704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.020731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.023897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.023970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.023991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.027182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.027275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.027295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.030448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.030527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.030554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.033734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 
11:20:08.033813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.033839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.036911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.036998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.037024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.040092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.040245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.040272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.043388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.043507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.043533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.046731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.046811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.046832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.049987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.568 [2024-12-05 11:20:08.050060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.568 [2024-12-05 11:20:08.050081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.568 [2024-12-05 11:20:08.053120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.053212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.053239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.056334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 
00:36:43.569 [2024-12-05 11:20:08.056421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.056448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.059607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.059725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.059751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.062911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.062988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.063009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.066169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.066247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.066267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.069335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.069440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.069466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.072598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.072683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.072706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.075879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.075970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.075990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.079183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.079276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.079302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.082477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.082559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.082598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.085707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.085792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.085818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.088918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.089041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.089067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.092153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.092234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.092260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.095385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.095527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.095552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.098706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.098807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.102010] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.102150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.102176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.105114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.105200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.105226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.108385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.108503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.108529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.111653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.111743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.111769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.114923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.115044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.115069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.118120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.118221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.118248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.121272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.121364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.121385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.124462] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.124600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.124621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.127721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.127825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.130913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.130989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.131016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.134023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.134103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.134130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.569 [2024-12-05 11:20:08.137218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.569 [2024-12-05 11:20:08.137291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.569 [2024-12-05 11:20:08.137312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.140348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.140424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.140444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.143484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.143552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.143573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.570 
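The repeating trio of records above is the NVMe/TCP data-digest failure path: the TCP transport computes a CRC32C over each received data PDU, data_crc32_calc_done compares it against the DDGST digest carried in the PDU, and on mismatch each WRITE is completed back through nvme_qpair.c with a transient transport error rather than being executed. The sketch below is a minimal, self-contained illustration of that check, assuming a bitwise CRC32C (SPDK's real code uses table-driven or hardware-accelerated helpers; pdu_payload and pdu_ddgst here are hypothetical stand-ins for the PDU fields):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 --
 * the digest algorithm NVMe/TCP uses for HDGST/DDGST. */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the payload matches the digest carried in the PDU,
 * -1 when it does not (the case logged above as "Data digest error"). */
static int verify_data_digest(const void *pdu_payload, size_t len, uint32_t pdu_ddgst)
{
	return crc32c(pdu_payload, len) == pdu_ddgst ? 0 : -1;
}

int main(void)
{
	/* Standard CRC32C check value: crc32c("123456789") == 0xE3069283. */
	const char msg[] = "123456789";

	printf("crc32c self-test: %s\n",
	       crc32c(msg, strlen(msg)) == 0xE3069283u ? "ok" : "FAIL");
	/* A corrupted digest is rejected, mirroring the log's error path. */
	printf("bad digest rejected: %s\n",
	       verify_data_digest(msg, strlen(msg), 0xDEADBEEFu) ? "yes" : "no");
	return 0;
}

Because the mismatch is caught at the transport layer rather than at the namespace, the command never touches the media; the receiver fails it with a status the host is allowed to retry, which is why this run shows only transient completions and no data errors.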
[2024-12-05 11:20:08.146760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.146846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.146867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.149875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.149965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.149991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.153015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.153093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.153120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.156207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.156327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.156353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.159453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.159574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.159617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.162889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.163028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.163056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.166324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.166402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.166422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.169760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.169866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.169886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.173051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.173181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.173207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.176430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.176506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.176535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.179879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.179962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.179990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.183188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.183283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.183310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.186653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.186750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.186776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.190054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.190148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.190174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.193503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.193569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.193589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.196928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.197022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.197044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.200272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.200381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.200410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.203531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.203612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.203633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.206864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.206957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.206977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.210194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.210333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.210355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.213641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.213721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.213745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.570 [2024-12-05 11:20:08.217111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.570 [2024-12-05 11:20:08.217209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.570 [2024-12-05 11:20:08.217230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.220422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.220569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.220608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.223726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.223811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.223839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.227062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.227152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.227179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.230389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.230471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.230491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.233634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.233712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.233737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.237061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.237195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.237220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.240278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.240414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.240439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.243508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.243582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.243619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.246776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.246863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.246890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.250287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.250363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.250389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.253762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.253859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.253885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.257321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.257414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.257435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.260703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.260812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 
11:20:08.260834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.264109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.264191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.264218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.267531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.267645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.267672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.270942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.271036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.271061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.274298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.274428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.274454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.277750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.277832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.277859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.281218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.281319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.281346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.284709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.284791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:43.831 [2024-12-05 11:20:08.284818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.287974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.288115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.288141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.291311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.291386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.291413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.294659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.294761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.294786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.298091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.298188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.298216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.301505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.301590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.301611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.304883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.304979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.305001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.831 [2024-12-05 11:20:08.308314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.831 [2024-12-05 11:20:08.308398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.831 [2024-12-05 11:20:08.308421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.311660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.311754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.311774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.315126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.315201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.315228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.318547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.318639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.318662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.321988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.322078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.322098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.325345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.325459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.325480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.328806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.328910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.328932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.332143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.332254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.332281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.335438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.335514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.335534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.338858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.338942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.338963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.342227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.342322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.342344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.345724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.345818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.345844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.349025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.349149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.349175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.352361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.352491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.352519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.355801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.355881] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.355903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.359192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.359264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.359284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.362580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.362744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.362771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.366065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.366166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.366193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.369437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.369570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.369605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.372852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.372933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.372961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.376301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.376386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.376410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.379682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.379750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.379770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.382999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.383088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.383114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.386320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.386413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.386440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.389714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.389829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.389855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.393002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.393105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.393131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.396254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.396390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.396418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.832 [2024-12-05 11:20:08.399640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.832 [2024-12-05 11:20:08.399766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.832 [2024-12-05 11:20:08.399792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.403025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 
11:20:08.403156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.403181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.406319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.406401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.406427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.409623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.409698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.409724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.412864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.412964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.412990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.416118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.416209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.416237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.419399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.419532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.419559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.422764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.422851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.422877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.426138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with 
pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.426228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.426255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.429482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.429614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.429639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.432738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.432817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.432843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.435950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.436087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.436130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.439249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.439340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.439367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.442553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.442658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.442683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.445842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.445918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.445944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.449092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.449193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.449219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.452285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.452366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.452388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.455510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.455614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.455635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.458920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.459051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.459077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.462185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.462279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.462305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.465404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.465474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.465500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.468780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.468857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.468885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.472017] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.472167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.472193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.475317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.475405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.475430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.478624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.478716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.478741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:43.833 [2024-12-05 11:20:08.481970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:43.833 [2024-12-05 11:20:08.482045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.833 [2024-12-05 11:20:08.482065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.485366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.485443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.485463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.488621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.488697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.488724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.491773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.491855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.491881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.094 
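Every completion in this run carries the same status breakdown, and the print helper's notation is compact: "(00/22)" is status code type 0x0 (generic command status) with status code 0x22, Command Transient Transport Error, while p, m, and dnr are the phase tag, more, and do-not-retry bits of the 16-bit status half of completion dword 3, and sqhd is the submission queue head pointer from dword 2. As a rough guide to how those bits unpack, here is a sketch following the NVMe completion layout, not SPDK's own decoder (the struct and field names are mine):

#include <stdint.h>
#include <stdio.h>

/* Upper 16 bits of NVMe completion dword 3:
 * bit 0 = phase tag (P), bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT), bits 13:12 = command
 * retry delay (CRD, NVMe 1.4+), bit 14 = more (M),
 * bit 15 = do not retry (DNR). */
struct cpl_status {
	unsigned p   : 1;
	unsigned sc  : 8;
	unsigned sct : 3;
	unsigned crd : 2;
	unsigned m   : 1;
	unsigned dnr : 1;
};

static struct cpl_status decode_status(uint16_t raw)
{
	struct cpl_status s = {
		.p   = raw & 0x1,
		.sc  = (raw >> 1) & 0xFF,
		.sct = (raw >> 9) & 0x7,
		.crd = (raw >> 12) & 0x3,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return s;
}

int main(void)
{
	/* 0x0044 encodes SCT 0x0, SC 0x22, P/M/DNR all zero -- the
	 * "(00/22) ... p:0 m:0 dnr:0" pattern seen throughout this run. */
	struct cpl_status s = decode_status(0x0044);

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
	return 0;
}

With dnr:0 the spec leaves each failed command retryable, which is consistent with the test continuing to drive fresh WRITEs down the same qpair after every rejected one.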
[2024-12-05 11:20:08.495030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.495102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.495128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.498319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.498405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.498432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.501703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.501931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.501980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.505304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.505530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.505564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.508655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.508833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.508867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.511872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.512111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.512146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.515116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.515276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.515307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.518393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.518575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.518616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.521607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.521822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.521852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.524743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.524959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.525001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.527977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.528202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.528229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.531291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.531458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.531484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.534622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.534817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.534847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.537900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.538106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.538136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.541165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.541327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.541356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.544324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.094 [2024-12-05 11:20:08.544501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.094 [2024-12-05 11:20:08.544549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.094 [2024-12-05 11:20:08.547630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.547812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.547841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.550943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.551082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.551108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.554275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.554473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.554503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.557741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.557970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.558002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.561171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.561397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.561428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.564672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.564926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.564959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.568159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.568420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.568453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.571749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.571961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.571989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.575409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.575641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.575677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.578964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.579154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.579199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.582572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.582739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.582765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.586089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.586265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.586292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.589624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.589835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.589875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.593084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.593255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.593285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.596543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.596757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.596801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.599999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.600237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.600270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.603428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.603680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.603711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.606787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.607018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 11:20:08.607049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:44.095 [2024-12-05 11:20:08.610109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8 00:36:44.095 [2024-12-05 11:20:08.610333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:44.095 [2024-12-05 
11:20:08.610363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:44.095 [2024-12-05 11:20:08.613569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:44.095 [2024-12-05 11:20:08.613798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:44.095 [2024-12-05 11:20:08.613828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:44.095 [2024-12-05 11:20:08.616908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:44.095 [2024-12-05 11:20:08.617100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:44.095 [2024-12-05 11:20:08.617129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:44.095 [2024-12-05 11:20:08.620162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:44.095 [2024-12-05 11:20:08.620326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:44.095 [2024-12-05 11:20:08.620354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:44.095 [2024-12-05 11:20:08.623386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:44.095 [2024-12-05 11:20:08.623570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:44.095 [2024-12-05 11:20:08.623625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:44.095 [2024-12-05 11:20:08.626736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ec52e0) with pdu=0x200016eff3c8
00:36:44.095 [2024-12-05 11:20:08.626980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:44.095 [2024-12-05 11:20:08.627012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:44.095 8961.50 IOPS, 1120.19 MiB/s
00:36:44.095 Latency(us)
00:36:44.095 [2024-12-05T11:20:08.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:44.095 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:44.095 nvme0n1 : 2.00 8958.40 1119.80 0.00 0.00 1782.48 1115.67 7645.87
00:36:44.095 [2024-12-05T11:20:08.747Z] ===================================================================================================================
00:36:44.095 [2024-12-05T11:20:08.747Z] Total : 8958.40 1119.80 0.00 0.00 1782.48 1115.67 7645.87
00:36:44.095 {
00:36:44.095 "results": [
00:36:44.095 {
00:36:44.095 "job": "nvme0n1",
00:36:44.095 "core_mask": "0x2",
00:36:44.095 "workload": "randwrite",
00:36:44.095 "status": "finished",
00:36:44.095 "queue_depth": 16,
"io_size": 131072,
00:36:44.095 "runtime": 2.003149,
00:36:44.095 "iops": 8958.395007061381,
00:36:44.095 "mibps": 1119.7993758826726,
00:36:44.096 "io_failed": 0,
00:36:44.096 "io_timeout": 0,
00:36:44.096 "avg_latency_us": 1782.4839424166435,
00:36:44.096 "min_latency_us": 1115.672380952381,
00:36:44.096 "max_latency_us": 7645.866666666667
00:36:44.096 }
00:36:44.096 ],
00:36:44.096 "core_count": 1
00:36:44.096 }
00:36:44.096 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:44.096 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:44.096 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:44.096 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:44.096 | .driver_specific
00:36:44.096 | .nvme_error
00:36:44.096 | .status_code
00:36:44.096 | .command_transient_transport_error'
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 579 > 0 ))
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95324
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95324 ']'
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95324
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95324
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:44.355 killing process with pid 95324
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95324'
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95324
00:36:44.355 Received shutdown signal, test time was about 2.000000 seconds
00:36:44.355
00:36:44.355 Latency(us)
00:36:44.355 [2024-12-05T11:20:09.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:44.355 [2024-12-05T11:20:09.007Z] ===================================================================================================================
00:36:44.355 [2024-12-05T11:20:09.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:44.355 11:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95324
00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95032
00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95032 ']'
00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95032
00:36:44.613 11:20:09
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95032 00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:44.613 killing process with pid 95032 00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95032' 00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95032 00:36:44.613 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95032 00:36:44.872 00:36:44.872 real 0m16.673s 00:36:44.872 user 0m31.578s 00:36:44.872 sys 0m5.088s 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:44.872 ************************************ 00:36:44.872 END TEST nvmf_digest_error 00:36:44.872 ************************************ 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:44.872 rmmod nvme_tcp 00:36:44.872 rmmod nvme_fabrics 00:36:44.872 rmmod nvme_keyring 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 95032 ']' 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 95032 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 95032 ']' 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 95032 00:36:44.872 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (95032) - No such process 00:36:44.872 Process with pid 95032 is not found 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 95032 is not found' 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 
-- # nvmf_fini 00:36:44.872 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev 00:36:44.873 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:44.873 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:44.873 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:44.873 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # continue 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@261 -- # continue 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@274 -- # iptr 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore 00:36:45.132 ************************************ 00:36:45.132 END TEST nvmf_digest 00:36:45.132 ************************************ 00:36:45.132 00:36:45.132 real 0m34.230s 00:36:45.132 user 1m3.123s 00:36:45.132 sys 0m10.632s 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.132 ************************************ 00:36:45.132 START TEST nvmf_mdns_discovery 00:36:45.132 ************************************ 00:36:45.132 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:36:45.394 * Looking for test storage... 
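How the nvmf_digest_error suite that just ended verified its injected corruption: host/digest.sh@71 requires the transient-error count to be positive after bdevperf ran writes through the CRC32C data-digest fault path, and it reads that count from bdevperf's per-bdev NVMe error statistics over the bperf RPC socket. A minimal sketch of the same check, reusing only the rpc.py invocation and jq filter visible in the trace above (the errcount variable name is illustrative, not part of the script):

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # in this run: 579 COMMAND TRANSIENT TRANSPORT ERROR completions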
00:36:45.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:45.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.394 --rc genhtml_branch_coverage=1 00:36:45.394 --rc genhtml_function_coverage=1 00:36:45.394 --rc genhtml_legend=1 00:36:45.394 --rc geninfo_all_blocks=1 00:36:45.394 --rc geninfo_unexecuted_blocks=1 00:36:45.394 00:36:45.394 ' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:45.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.394 --rc genhtml_branch_coverage=1 00:36:45.394 --rc genhtml_function_coverage=1 00:36:45.394 --rc genhtml_legend=1 00:36:45.394 --rc geninfo_all_blocks=1 00:36:45.394 --rc geninfo_unexecuted_blocks=1 00:36:45.394 00:36:45.394 ' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:45.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.394 --rc genhtml_branch_coverage=1 00:36:45.394 --rc genhtml_function_coverage=1 00:36:45.394 --rc genhtml_legend=1 00:36:45.394 --rc geninfo_all_blocks=1 00:36:45.394 --rc geninfo_unexecuted_blocks=1 00:36:45.394 00:36:45.394 ' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:45.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:45.394 --rc genhtml_branch_coverage=1 00:36:45.394 --rc genhtml_function_coverage=1 00:36:45.394 --rc genhtml_legend=1 00:36:45.394 --rc geninfo_all_blocks=1 00:36:45.394 --rc geninfo_unexecuted_blocks=1 00:36:45.394 00:36:45.394 ' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:45.394 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@50 -- # : 0 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:36:45.395 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:45.395 11:20:09 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@223 -- # create_target_ns 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@139 -- # set_up 
lo NVMF_TARGET_NS_CMD 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:45.395 11:20:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@224 -- # create_main_bridge 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@121 -- # return 0 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # ips=() 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:45.395 11:20:10 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up initiator0 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:36:45.395 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target0 up 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:45.655 10.0.0.1 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:36:45.655 10.0.0.2 00:36:45.655 11:20:10 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:36:45.655 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@129 -- # set_up target0_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 
-i initiator0 -p tcp --dport 4420 -j ACCEPT 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # ips=() 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up initiator1 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:36:45.656 11:20:10 
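The ACCEPT rule inserted at the top of this block goes through the ipts wrapper, which tags the rule with an -m comment echoing its own arguments so teardown can remove exactly the rules this run added without touching pre-existing firewall state. A sketch of the pattern as it appears in the trace, with the cleanup shown as an assumption (it is not part of this log):

    # tag each inserted rule with its own argument string
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

    # plausible teardown: restore everything except the tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore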
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:36:45.656 10.0.0.3 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:36:45.656 10.0.0.4 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:36:45.656 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # 
eval ' ip link set initiator1_br up' 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:36:45.917 11:20:10 
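At this point both interface pairs are wired up and ping_ips 2 walks them to verify connectivity in each direction. The topology being exercised, as reconstructed from the ip commands above: only the target endpoints were moved into nvmf_ns_spdk, while every *_br veth peer stays in the host namespace, enslaved to the nvmf_br bridge.

    initiator0 (host, 10.0.0.1) <-veth-> initiator0_br --+
    initiator1 (host, 10.0.0.3) <-veth-> initiator1_br --+-- nvmf_br (host bridge)
    target0 (netns, 10.0.0.2)   <-veth-> target0_br    --+
    target1 (netns, 10.0.0.4)   <-veth-> target1_br    --+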
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:45.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:45.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:36:45.917 00:36:45.917 --- 10.0.0.1 ping statistics --- 00:36:45.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:45.917 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:45.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:45.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:36:45.917 00:36:45.917 --- 10.0.0.2 ping statistics --- 00:36:45.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:45.917 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:45.917 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:36:45.918 PING 10.0.0.3 (10.0.0.3) 
56(84) bytes of data. 00:36:45.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:36:45.918 00:36:45.918 --- 10.0.0.3 ping statistics --- 00:36:45.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:45.918 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:36:45.918 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:36:45.918 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.193 ms 00:36:45.918 00:36:45.918 --- 10.0.0.4 ping statistics --- 00:36:45.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:45.918 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@281 -- # return 0 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:36:45.918 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target0 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target1 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:36:45.919 ' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@328 -- # nvmfpid=95656 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@329 -- # waitforlisten 95656 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 
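Just before launching the target above, nvmf_legacy_env folded the dev_map back into the variable names older test scripts expect. For this run the mapping resolved in the trace is:

    NVMF_TARGET_INTERFACE=target0        NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1     NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2        NVMF_SECOND_TARGET_IP=10.0.0.4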
00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95656 ']' 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.919 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.179 [2024-12-05 11:20:10.595730] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:46.179 [2024-12-05 11:20:10.595825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:46.179 [2024-12-05 11:20:10.748160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.179 [2024-12-05 11:20:10.808600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:46.179 [2024-12-05 11:20:10.808661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:46.179 [2024-12-05 11:20:10.808677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:46.179 [2024-12-05 11:20:10.808690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:46.179 [2024-12-05 11:20:10.808702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
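nvmfappstart, condensed: the app command is prefixed with NVMF_TARGET_NS_CMD so the target runs inside the test namespace, and waitforlisten (the autotest_common.sh poller whose echo appears above) blocks until the RPC socket answers. A sketch of what the trace shows, with the backgrounding and pid capture inferred from the recorded nvmfpid:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the target responds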
00:36:46.179 [2024-12-05 11:20:10.809077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 [2024-12-05 11:20:11.003329] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 [2024-12-05 11:20:11.015512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 null0 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 null1 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 null2 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 null3 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95698 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95698 /tmp/host.sock 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95698 ']' 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:46.439 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:46.439 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:46.700 [2024-12-05 11:20:11.129607] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
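With the target paused at --wait-for-rpc, the test pushes its configuration over the RPC socket before starting the reactor framework: an address-based discovery filter, the TCP transport, a discovery listener on port 8009, and four null bdevs to export later. Condensed from the rpc_cmd calls above:

    rpc_cmd nvmf_set_config --discovery-filter=address
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    for n in null0 null1 null2 null3; do rpc_cmd bdev_null_create "$n" 1000 512; done
    rpc_cmd bdev_wait_for_examine

A second nvmf_tgt is then launched on the host side with its own RPC socket (-r /tmp/host.sock, -m 0x1) to act as the mDNS discovery initiator.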
00:36:46.700 [2024-12-05 11:20:11.129708] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95698 ] 00:36:46.700 [2024-12-05 11:20:11.279568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.700 [2024-12-05 11:20:11.326540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95709 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=target0,target1\nuse-ipv4=yes\nuse-ipv6=no' 00:36:46.958 11:20:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_ns_spdk avahi-daemon -f /dev/fd/63 00:36:46.958 Process 1059 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:36:46.959 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:36:46.959 Successfully dropped root privileges. 00:36:46.959 avahi-daemon 0.8 starting up. 00:36:46.959 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:36:46.959 Successfully called chroot(). 00:36:46.959 Successfully dropped remaining capabilities. 00:36:46.959 No service file found in /etc/avahi/services. 00:36:47.892 Joining mDNS multicast group on interface target1.IPv4 with address 10.0.0.4. 00:36:47.892 New relevant interface target1.IPv4 for mDNS. 00:36:47.892 Joining mDNS multicast group on interface target0.IPv4 with address 10.0.0.2. 00:36:47.892 New relevant interface target0.IPv4 for mDNS. 00:36:47.892 Network interface enumeration completed. 00:36:47.892 Registering new address record for fe80::d07d:ffff:fe85:cc2b on target1.*. 00:36:47.892 Registering new address record for 10.0.0.4 on target1.IPv4. 00:36:47.892 Registering new address record for fe80::401:3fff:fe12:f6f on target0.*. 00:36:47.892 Registering new address record for 10.0.0.2 on target0.IPv4. 00:36:47.892 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2966812612. 
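The avahi responder is started inside the target namespace with its configuration fed through process substitution (the /dev/fd/63 seen in the trace), restricted to the two target interfaces and IPv4 only; it then joins the mDNS groups on 10.0.0.2 and 10.0.0.4 as logged above. A sketch, with the backgrounding inferred from the recorded avahipid:

    avahi-daemon --kill   # stop any system-wide instance first
    ip netns exec nvmf_ns_spdk avahi-daemon \
        -f <(echo -e '[server]\nallow-interfaces=target0,target1\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!
    sleep 1   # give the responder time to enumerate interfaces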
00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 
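After enabling bdev_nvme logging, the host app starts mDNS discovery (bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test), and the empty-string comparisons that follow assert that no controllers or bdevs exist yet before any subsystems are advertised. Both checks use the jq/sort/xargs pipeline visible in the trace; as standalone helpers:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }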
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.151 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 11:20:12 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 [2024-12-05 11:20:12.862136] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 [2024-12-05 11:20:12.903870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:36:48.411 11:20:12 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.411 11:20:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:36:49.347 [2024-12-05 11:20:13.762153] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:36:49.605 [2024-12-05 11:20:14.162185] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:36:49.606 [2024-12-05 11:20:14.162247] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:36:49.606 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:49.606 cookie is 0 00:36:49.606 is_local: 1 00:36:49.606 our_own: 0 00:36:49.606 wide_area: 0 00:36:49.606 multicast: 1 00:36:49.606 cached: 1 00:36:49.864 [2024-12-05 11:20:14.262148] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:36:49.864 [2024-12-05 11:20:14.262172] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:36:49.864 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:49.864 cookie is 0 00:36:49.864 is_local: 1 00:36:49.864 our_own: 0 00:36:49.864 wide_area: 0 00:36:49.864 multicast: 1 00:36:49.864 cached: 1 00:36:50.801 [2024-12-05 11:20:15.163325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.801 [2024-12-05 11:20:15.163414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f68850 with addr=10.0.0.4, port=8009 00:36:50.801 [2024-12-05 11:20:15.163462] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:50.801 [2024-12-05 11:20:15.163479] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:50.801 [2024-12-05 11:20:15.163492] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:36:50.801 [2024-12-05 11:20:15.268395] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:50.801 [2024-12-05 11:20:15.268425] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:50.801 [2024-12-05 11:20:15.268442] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:50.801 [2024-12-05 11:20:15.354491] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:36:50.801 [2024-12-05 11:20:15.409025] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:36:50.801 [2024-12-05 11:20:15.409991] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f9da10:1 started. 00:36:50.801 [2024-12-05 11:20:15.412018] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:36:50.802 [2024-12-05 11:20:15.412070] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:50.802 [2024-12-05 11:20:15.417032] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f9da10 was disconnected and freed. delete nvme_qpair. 00:36:51.738 [2024-12-05 11:20:16.163131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.738 [2024-12-05 11:20:16.163208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f865e0 with addr=10.0.0.4, port=8009 00:36:51.738 [2024-12-05 11:20:16.163237] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:51.738 [2024-12-05 11:20:16.163249] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:51.738 [2024-12-05 11:20:16.163259] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:36:52.671 [2024-12-05 11:20:17.163152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.671 [2024-12-05 11:20:17.163212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f867c0 with addr=10.0.0.4, port=8009 00:36:52.671 [2024-12-05 11:20:17.163241] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:52.671 [2024-12-05 11:20:17.163252] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:52.671 [2024-12-05 11:20:17.163262] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:36:53.640 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:36:53.640 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:53.640 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:53.640 11:20:17 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:53.640 [2024-12-05 11:20:17.995519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:36:53.640 [2024-12-05 11:20:17.997040] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:53.640 [2024-12-05 11:20:17.997082] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.640 11:20:17 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:53.640 [2024-12-05 11:20:18.003436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:36:53.640 [2024-12-05 11:20:18.004005] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:53.640 11:20:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.640 11:20:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:36:53.640 [2024-12-05 11:20:18.135100] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:53.640 [2024-12-05 11:20:18.135134] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
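
For orientation: the xtrace from @61 up to this point boils down to a short provisioning sequence. The sketch below is a condensed replay, not the verbatim test script; rpc_cmd is approximated with a thin wrapper (in the real harness it is the autotest helper around scripts/rpc.py), while the socket path, NQNs, addresses and ports are exactly the ones visible in the trace. The 10.0.0.4-side listeners are added slightly later (@154/@156 above).

#!/usr/bin/env bash
# Condensed sketch of the setup traced above. Assumes a target SPDK app on
# the default RPC socket and a host-side app listening on /tmp/host.sock.
rpc_cmd() { scripts/rpc.py "$@"; }   # stand-in for the autotest rpc_cmd helper

# Host side: enable bdev_nvme logging, then start mDNS-driven discovery
# for the _nvme-disc._tcp service type with the test host NQN.
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Target side: two subsystems, each with a namespace and the test host NQN
# allowed, a TCP listener, then publish the discovery service over mDNS (PRR).
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_publish_mdns_prr
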
00:36:53.640 [2024-12-05 11:20:18.169469] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:36:53.640 [2024-12-05 11:20:18.169496] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:36:53.640 [2024-12-05 11:20:18.169511] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:36:53.640 [2024-12-05 11:20:18.222583] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:53.640 [2024-12-05 11:20:18.255539] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:36:53.899 [2024-12-05 11:20:18.309871] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:36:53.899 [2024-12-05 11:20:18.310455] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x1f9ab00:1 started. 00:36:53.899 [2024-12-05 11:20:18.312015] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:36:53.899 [2024-12-05 11:20:18.312067] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:36:53.899 [2024-12-05 11:20:18.318216] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x1f9ab00 was disconnected and freed. delete nvme_qpair. 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:36:54.465 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:36:54.465 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:36:54.465 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:36:54.465 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:54.465 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:54.465 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:54.465 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:36:54.465 
11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:36:54.465 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:54.724 [2024-12-05 11:20:19.162212] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:36:54.724 [2024-12-05 11:20:19.162252] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:36:54.724 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:54.724 cookie is 0 00:36:54.724 is_local: 1 00:36:54.724 our_own: 0 00:36:54.724 wide_area: 0 00:36:54.724 multicast: 1 00:36:54.724 cached: 1 00:36:54.724 [2024-12-05 11:20:19.162267] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 
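
The two check_mdns_request_exists calls above (@152 expecting 'not found' before the 10.0.0.4 discovery listener exists, @160 expecting 'found' once it does) parse avahi-browse output line by line, as the @85-@108 xtrace shows. A sketch reconstructed from that trace; the real helper lives in host/mdns_discovery.sh and may differ in detail:

# Scan the parseable avahi-browse dump for a service name / address / port
# triple and compare against the expected "found" / "not found" verdict.
check_mdns_request_exists() {
    local process=$1 ip=$2 port=$3 check_type=$4
    local output line lines
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)
    readarray -t lines <<< "$output"
    for line in "${lines[@]}"; do
        # Only a '=' (resolved) line carries name, address and port together.
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == found ]] && return 0   # present, as expected
            return 1                                 # present, but should not be
        fi
    done
    [[ $check_type == found ]] && return 1           # missing, but expected
    return 0                                         # missing, as expected
}

check_mdns_request_exists spdk1 10.0.0.4 8009 'not found'   # @152, before @154
check_mdns_request_exists spdk1 10.0.0.4 8009 found         # @160, after
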
00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.724 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:54.983 [2024-12-05 11:20:19.411234] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f9cf20:1 started. 00:36:54.983 [2024-12-05 11:20:19.414654] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x2023350:1 started. 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.983 11:20:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:36:54.983 [2024-12-05 11:20:19.419168] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f9cf20 was disconnected and freed. delete nvme_qpair. 00:36:54.983 [2024-12-05 11:20:19.419535] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x2023350 was disconnected and freed. delete nvme_qpair. 00:36:54.983 [2024-12-05 11:20:19.462229] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:36:54.983 [2024-12-05 11:20:19.462252] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:36:54.983 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:36:54.983 cookie is 0 00:36:54.983 is_local: 1 00:36:54.983 our_own: 0 00:36:54.983 wide_area: 0 00:36:54.983 multicast: 1 00:36:54.983 cached: 1 00:36:54.983 [2024-12-05 11:20:19.462267] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:55.920 [2024-12-05 11:20:20.532824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:55.920 [2024-12-05 11:20:20.533896] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:55.920 [2024-12-05 11:20:20.533939] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:55.920 [2024-12-05 11:20:20.533974] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:36:55.920 [2024-12-05 11:20:20.533987] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:55.920 [2024-12-05 11:20:20.540675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:36:55.920 [2024-12-05 11:20:20.540884] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:55.920 [2024-12-05 11:20:20.540923] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.920 11:20:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:36:56.179 [2024-12-05 11:20:20.671965] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:36:56.179 [2024-12-05 11:20:20.672382] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:36:56.179 [2024-12-05 11:20:20.733427] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:36:56.179 [2024-12-05 11:20:20.733501] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:36:56.179 [2024-12-05 11:20:20.733513] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:56.179 [2024-12-05 11:20:20.733520] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:56.179 [2024-12-05 11:20:20.733539] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:56.179 [2024-12-05 11:20:20.734102] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:36:56.179 [2024-12-05 11:20:20.734166] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:36:56.179 [2024-12-05 11:20:20.734175] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:36:56.179 [2024-12-05 11:20:20.734181] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:36:56.179 [2024-12-05 11:20:20.734196] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:36:56.179 [2024-12-05 11:20:20.779165] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:56.179 [2024-12-05 11:20:20.779192] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:56.179 [2024-12-05 11:20:20.780159] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:36:56.179 [2024-12-05 11:20:20.780173] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] 
NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:36:57.116 11:20:21 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:57.116 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:57.378 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.378 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:57.378 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:36:57.378 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:57.379 [2024-12-05 11:20:21.841658] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:57.379 [2024-12-05 11:20:21.841692] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:57.379 [2024-12-05 11:20:21.841737] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:36:57.379 [2024-12-05 11:20:21.841748] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:57.379 [2024-12-05 11:20:21.850139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.850175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.850187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.850196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.850207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.850216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.850225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.850234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.850244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7a330 is same with the state(6) to be set 00:36:57.379 [2024-12-05 11:20:21.853656] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:57.379 [2024-12-05 11:20:21.853698] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:36:57.379 [2024-12-05 11:20:21.853838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.853858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.853868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.853877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.853887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.853895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.853905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:57.379 [2024-12-05 11:20:21.853915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:57.379 [2024-12-05 11:20:21.853924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86b40 is same with the state(6) to be set 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.379 11:20:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:36:57.379 [2024-12-05 11:20:21.860090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1f7a330 (9): Bad file descriptor 00:36:57.379 [2024-12-05 11:20:21.863809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86b40 (9): Bad file descriptor 00:36:57.379 [2024-12-05 11:20:21.870106] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:57.379 [2024-12-05 11:20:21.870122] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:57.379 [2024-12-05 11:20:21.870128] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:57.379 [2024-12-05 11:20:21.870134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:57.379 [2024-12-05 11:20:21.870161] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:57.379 [2024-12-05 11:20:21.870232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.379 [2024-12-05 11:20:21.870248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7a330 with addr=10.0.0.2, port=4420 00:36:57.379 [2024-12-05 11:20:21.870258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7a330 is same with the state(6) to be set 00:36:57.379 [2024-12-05 11:20:21.870271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7a330 (9): Bad file descriptor 00:36:57.379 [2024-12-05 11:20:21.870284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:57.379 [2024-12-05 11:20:21.870293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:57.379 [2024-12-05 11:20:21.870303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:57.379 [2024-12-05 11:20:21.870311] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:36:57.379 [2024-12-05 11:20:21.870318] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:57.379 [2024-12-05 11:20:21.870324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:57.379 [2024-12-05 11:20:21.873814] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:36:57.379 [2024-12-05 11:20:21.873848] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:36:57.379 [2024-12-05 11:20:21.873854] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:36:57.379 [2024-12-05 11:20:21.873859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:36:57.379 [2024-12-05 11:20:21.873882] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
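
Stepping back from the reconnect traffic for a moment: the get_subsystem_names / get_bdev_list / get_subsystem_paths checks threaded through the trace (@120-@129, @162-@167, @186-@189) are thin jq pipelines over the host RPC socket. Approximately, under the same rpc_cmd stand-in as above:

# Each helper flattens a JSON array from the host-side app into one sorted,
# space-separated line so the test can string-compare it with [[ ... ]].
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
    # Listener ports (trsvcid) of every path of one controller, e.g. "4420 4421".
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# Usage mirroring the assertions above:
[[ $(get_subsystem_names) == "mdns0_nvme0 mdns1_nvme0" ]]
[[ $(get_subsystem_paths mdns0_nvme0) == "4420 4421" ]]
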
00:36:57.379 [2024-12-05 11:20:21.873931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.379 [2024-12-05 11:20:21.873945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f86b40 with addr=10.0.0.4, port=4420 00:36:57.379 [2024-12-05 11:20:21.873955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86b40 is same with the state(6) to be set 00:36:57.379 [2024-12-05 11:20:21.873967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86b40 (9): Bad file descriptor 00:36:57.379 [2024-12-05 11:20:21.873981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:36:57.380 [2024-12-05 11:20:21.873990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:36:57.380 [2024-12-05 11:20:21.873999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:36:57.380 [2024-12-05 11:20:21.874006] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:36:57.380 [2024-12-05 11:20:21.874012] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:36:57.380 [2024-12-05 11:20:21.874017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:36:57.380 [2024-12-05 11:20:21.880169] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:57.380 [2024-12-05 11:20:21.880185] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:57.380 [2024-12-05 11:20:21.880190] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:57.380 [2024-12-05 11:20:21.880196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:57.380 [2024-12-05 11:20:21.880216] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:36:57.380 [2024-12-05 11:20:21.880254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.380 [2024-12-05 11:20:21.880268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7a330 with addr=10.0.0.2, port=4420 00:36:57.380 [2024-12-05 11:20:21.880277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7a330 is same with the state(6) to be set 00:36:57.380 [2024-12-05 11:20:21.880289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7a330 (9): Bad file descriptor 00:36:57.380 [2024-12-05 11:20:21.880302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:57.380 [2024-12-05 11:20:21.880310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:57.380 [2024-12-05 11:20:21.880319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:57.380 [2024-12-05 11:20:21.880326] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:36:57.380 [2024-12-05 11:20:21.880332] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:36:57.380 [2024-12-05 11:20:21.880337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:57.380 [2024-12-05 11:20:21.883889] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:36:57.380 [2024-12-05 11:20:21.883903] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:36:57.380 [2024-12-05 11:20:21.883909] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:36:57.380 [2024-12-05 11:20:21.883914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:36:57.380 [2024-12-05 11:20:21.883949] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:36:57.380 [2024-12-05 11:20:21.883985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.380 [2024-12-05 11:20:21.883997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f86b40 with addr=10.0.0.4, port=4420 00:36:57.380 [2024-12-05 11:20:21.884006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86b40 is same with the state(6) to be set 00:36:57.380 [2024-12-05 11:20:21.884019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86b40 (9): Bad file descriptor 00:36:57.380 [2024-12-05 11:20:21.884039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:36:57.380 [2024-12-05 11:20:21.884047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:36:57.380 [2024-12-05 11:20:21.884056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:36:57.380 [2024-12-05 11:20:21.884063] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:36:57.380 [2024-12-05 11:20:21.884069] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:36:57.380 [2024-12-05 11:20:21.884074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:36:57.380 [2024-12-05 11:20:21.890223] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:36:57.380 [2024-12-05 11:20:21.890238] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:36:57.380 [2024-12-05 11:20:21.890244] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:36:57.380 [2024-12-05 11:20:21.890249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:57.380 [2024-12-05 11:20:21.890268] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
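
The notification bookkeeping seen earlier (@114 notify_id=0, @116-@117 advancing it to 2 and then 4 as the null0/null2 and null1/null3 namespaces surface as bdevs, and a final count of 0 at @190-@191) only ever asks for events newer than the last consumed ID. A plausible sketch consistent with those counters; the exact cursor arithmetic in the harness may differ:

notify_id=0
get_notification_count() {
    # Fetch only notifications past notify_id; '. | length' counts the new
    # ones (bdev creations show up here), then advance the cursor by that many.
    notification_count=$(rpc_cmd -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

# Trace values: -i 0 -> count 2, notify_id 2; -i 2 -> count 2, notify_id 4;
# -i 4 -> count 0 while nothing new has appeared.
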
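The retry loop only ends once the discovery poller re-reads the discovery log page and sees that the subsystems are no longer published on 4420 but are on 4421, at which point the stale paths are dropped (the "not found" / "found again" entries below). The same log page can be inspected by hand with nvme-cli against either discovery endpoint; nvme-cli is assumed installed here and is not used by the test itself:

nvme discover -t tcp -a 10.0.0.2 -s 8009    # should now list cnode0 with trsvcid 4421 only
nvme discover -t tcp -a 10.0.0.4 -s 8009    # should now list cnode20 with trsvcid 4421 only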
00:36:57.384 [2024-12-05 11:20:21.980842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:57.384 [2024-12-05 11:20:21.980856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f7a330 with addr=10.0.0.2, port=4420
00:36:57.384 [2024-12-05 11:20:21.980865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7a330 is same with the state(6) to be set
00:36:57.384 [2024-12-05 11:20:21.980878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7a330 (9): Bad file descriptor
00:36:57.384 [2024-12-05 11:20:21.980905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:36:57.384 [2024-12-05 11:20:21.980914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:36:57.384 [2024-12-05 11:20:21.980922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:36:57.384 [2024-12-05 11:20:21.980930] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:36:57.384 [2024-12-05 11:20:21.980935] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:36:57.384 [2024-12-05 11:20:21.980940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:36:57.384 [2024-12-05 11:20:21.983907] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:36:57.384 [2024-12-05 11:20:21.983930] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:36:57.384 [2024-12-05 11:20:21.983950] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:36:57.384 [2024-12-05 11:20:21.984494] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset.
00:36:57.384 [2024-12-05 11:20:21.984512] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted.
00:36:57.384 [2024-12-05 11:20:21.984518] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr.
00:36:57.384 [2024-12-05 11:20:21.984524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller
00:36:57.384 [2024-12-05 11:20:21.984547] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr.
00:36:57.384 [2024-12-05 11:20:21.984607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:57.384 [2024-12-05 11:20:21.984635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f86b40 with addr=10.0.0.4, port=4420 00:36:57.384 [2024-12-05 11:20:21.984646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86b40 is same with the state(6) to be set 00:36:57.385 [2024-12-05 11:20:21.984668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86b40 (9): Bad file descriptor 00:36:57.385 [2024-12-05 11:20:21.984681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:36:57.385 [2024-12-05 11:20:21.984690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:36:57.385 [2024-12-05 11:20:21.984700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:36:57.385 [2024-12-05 11:20:21.984708] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:36:57.385 [2024-12-05 11:20:21.984714] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:36:57.385 [2024-12-05 11:20:21.984720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:36:57.385 [2024-12-05 11:20:21.984922] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:36:57.385 [2024-12-05 11:20:21.984937] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:36:57.385 [2024-12-05 11:20:21.984953] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:36:57.644 [2024-12-05 11:20:22.069977] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:57.644 [2024-12-05 11:20:22.070973] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:36:58.211 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:36:58.470 11:20:22 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:58.470 11:20:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:58.470 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.730 11:20:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:36:58.730 [2024-12-05 11:20:23.162420] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:36:59.667 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.668 11:20:24 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.668 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:59.927 [2024-12-05 11:20:24.369189] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:36:59.927 2024/12/05 11:20:24 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:36:59.927 request: 00:36:59.927 { 00:36:59.927 "method": "bdev_nvme_start_mdns_discovery", 00:36:59.927 "params": { 00:36:59.927 "name": "mdns", 00:36:59.927 "svcname": "_nvme-disc._http", 00:36:59.927 "hostnqn": "nqn.2021-12.io.spdk:test" 00:36:59.927 } 00:36:59.927 } 00:36:59.927 Got JSON-RPC error response 00:36:59.927 GoRPCClient: error on JSON-RPC call 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:59.927 11:20:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:37:00.496 [2024-12-05 11:20:24.953872] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:37:00.496 [2024-12-05 11:20:25.053872] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:37:00.754 [2024-12-05 11:20:25.153884] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:37:00.754 [2024-12-05 11:20:25.153913] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:37:00.755 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:00.755 cookie is 0 00:37:00.755 is_local: 1 00:37:00.755 our_own: 0 00:37:00.755 wide_area: 0 00:37:00.755 multicast: 1 00:37:00.755 cached: 1 00:37:00.755 [2024-12-05 11:20:25.253884] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:37:00.755 [2024-12-05 11:20:25.253904] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:37:00.755 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:00.755 cookie is 0 00:37:00.755 is_local: 1 00:37:00.755 our_own: 0 00:37:00.755 wide_area: 0 00:37:00.755 multicast: 1 00:37:00.755 cached: 1 00:37:00.755 [2024-12-05 11:20:25.253916] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:37:00.755 [2024-12-05 11:20:25.353888] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:37:00.755 [2024-12-05 11:20:25.353916] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:37:00.755 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:00.755 cookie is 0 00:37:00.755 is_local: 1 00:37:00.755 our_own: 0 00:37:00.755 wide_area: 0 00:37:00.755 multicast: 1 00:37:00.755 cached: 1 00:37:01.013 [2024-12-05 11:20:25.453890] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:37:01.013 [2024-12-05 11:20:25.453913] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:37:01.013 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:01.013 cookie is 0 00:37:01.013 is_local: 1 00:37:01.013 our_own: 0 00:37:01.013 wide_area: 0 00:37:01.013 multicast: 1 00:37:01.013 cached: 1 00:37:01.013 [2024-12-05 11:20:25.453924] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:37:01.580 [2024-12-05 11:20:26.160654] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:37:01.580 [2024-12-05 11:20:26.160688] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:37:01.580 [2024-12-05 11:20:26.160702] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:37:01.839 [2024-12-05 11:20:26.246737] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:37:01.839 [2024-12-05 11:20:26.305136] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:37:01.839 [2024-12-05 11:20:26.305683] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x1fa0ad0:1 started. 00:37:01.839 [2024-12-05 11:20:26.307005] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:37:01.839 [2024-12-05 11:20:26.307031] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:37:01.839 [2024-12-05 11:20:26.309474] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x1fa0ad0 was disconnected and freed. delete nvme_qpair. 
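
The NOT rpc_cmd step above exercises the duplicate-name guard in bdev_nvme_start_mdns_discovery: a second discovery start that reuses the bdev name "mdns" is rejected with Code=-17 Msg=File exists, and the wrapper asserts the non-zero exit before discovery is restarted. A minimal standalone reproduction against a target serving RPCs on /tmp/host.sock (socket path and flags taken from the trace; invoking rpc.py directly rather than through the suite's rpc_cmd wrapper is an assumption):

  # Start mDNS discovery under the name "mdns" (succeeds, spawns the avahi poller).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

  # Reusing the name with a different service type must fail with -17 (File exists),
  # which is exactly the JSON-RPC error captured in the trace.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
      || echo "duplicate start rejected, as expected"
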
00:37:01.839 [2024-12-05 11:20:26.360430] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:01.839 [2024-12-05 11:20:26.360451] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:01.839 [2024-12-05 11:20:26.360464] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:01.839 [2024-12-05 11:20:26.446534] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:37:02.097 [2024-12-05 11:20:26.504955] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:37:02.097 [2024-12-05 11:20:26.505464] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2071560:1 started. 00:37:02.097 [2024-12-05 11:20:26.506781] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:37:02.097 [2024-12-05 11:20:26.506807] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:02.097 [2024-12-05 11:20:26.509537] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2071560 was disconnected and freed. delete nvme_qpair. 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.455 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.456 [2024-12-05 11:20:29.560800] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:37:05.456 2024/12/05 11:20:29 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:37:05.456 request: 00:37:05.456 { 00:37:05.456 "method": "bdev_nvme_start_mdns_discovery", 00:37:05.456 "params": { 00:37:05.456 "name": "cdc", 00:37:05.456 "svcname": "_nvme-disc._tcp", 00:37:05.456 "hostnqn": "nqn.2021-12.io.spdk:test" 00:37:05.456 } 00:37:05.456 } 00:37:05.456 Got JSON-RPC error response 00:37:05.456 GoRPCClient: error on JSON-RPC call 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.2 8009 found 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.2 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:37:05.456 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:37:05.456 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:37:05.456 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:37:05.456 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:05.456 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:05.456 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:05.456 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\2* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\2* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\2* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\2* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.456 11:20:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:37:05.456 [2024-12-05 11:20:29.753924] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:37:06.394 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.2 8009 'not found' 00:37:06.394 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:37:06.394 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.2 00:37:06.394 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:37:06.394 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:37:06.394 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:37:06.394 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:37:06.395 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:37:06.395 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:37:06.395 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95698 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95698 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95709 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@99 -- # sync 00:37:06.395 Got SIGTERM, quitting. 00:37:06.395 Leaving mDNS multicast group on interface target1.IPv4 with address 10.0.0.4. 00:37:06.395 Leaving mDNS multicast group on interface target0.IPv4 with address 10.0.0.2. 00:37:06.395 avahi-daemon 0.8 exiting. 
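
The found/not-found assertions above both run the same parse: capture the output of avahi-browse -t -r _nvme-disc._tcp -p, split it into lines, and substring-match the advertised service name, address, and port. After nvmf_subsystem_remove_listener drops 10.0.0.2:8009, the 'not found' pass succeeds because only spdk0 records remain in the browse output. A condensed model of that helper, paraphrased from the trace rather than copied verbatim from host/mdns_discovery.sh (the trace runs the name/ip/port tests as separate [[ ]] checks; they are folded into one condition here):

  check_mdns_request_exists() {
      local process=$1 ip=$2 port=$3 check_type=$4 output
      output=$(avahi-browse -t -r _nvme-disc._tcp -p)
      readarray -t lines <<< "$output"
      for line in "${lines[@]}"; do
          # A line counts as a hit only if it names the service, the address and the port.
          if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
              [[ $check_type == found ]] && return 0
              return 1
          fi
      done
      [[ $check_type == "not found" ]] && return 0
      return 1
  }

Invoked as in the trace: check_mdns_request_exists spdk1 10.0.0.2 8009 'not found'.
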
00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@102 -- # set +e 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:06.395 11:20:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:06.395 rmmod nvme_tcp 00:37:06.395 rmmod nvme_fabrics 00:37:06.395 rmmod nvme_keyring 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@106 -- # set -e 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@107 -- # return 0 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@336 -- # '[' -n 95656 ']' 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@337 -- # killprocess 95656 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 95656 ']' 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 95656 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.395 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95656 00:37:06.654 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:06.654 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:06.654 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95656' 00:37:06.654 killing process with pid 95656 00:37:06.654 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 95656 00:37:06.654 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 95656 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@254 -- # local dev 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:37:06.914 11:20:31 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:37:06.914 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # continue 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # continue 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@274 -- # iptr 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@548 -- # iptables-save 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:37:06.915 00:37:06.915 real 0m21.780s 00:37:06.915 user 0m41.442s 
00:37:06.915 sys 0m3.107s 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:06.915 11:20:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:06.915 ************************************ 00:37:06.915 END TEST nvmf_mdns_discovery 00:37:06.915 ************************************ 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:07.176 ************************************ 00:37:07.176 START TEST nvmf_host_multipath 00:37:07.176 ************************************ 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:37:07.176 * Looking for test storage... 00:37:07.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:07.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.176 --rc genhtml_branch_coverage=1 00:37:07.176 --rc genhtml_function_coverage=1 00:37:07.176 --rc genhtml_legend=1 00:37:07.176 --rc geninfo_all_blocks=1 00:37:07.176 --rc geninfo_unexecuted_blocks=1 00:37:07.176 00:37:07.176 ' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:07.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.176 --rc genhtml_branch_coverage=1 00:37:07.176 --rc genhtml_function_coverage=1 00:37:07.176 --rc genhtml_legend=1 00:37:07.176 --rc geninfo_all_blocks=1 00:37:07.176 --rc geninfo_unexecuted_blocks=1 00:37:07.176 00:37:07.176 ' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:07.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.176 --rc genhtml_branch_coverage=1 00:37:07.176 --rc genhtml_function_coverage=1 00:37:07.176 --rc genhtml_legend=1 00:37:07.176 --rc geninfo_all_blocks=1 00:37:07.176 --rc geninfo_unexecuted_blocks=1 00:37:07.176 00:37:07.176 ' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:07.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.176 --rc genhtml_branch_coverage=1 00:37:07.176 --rc genhtml_function_coverage=1 00:37:07.176 --rc genhtml_legend=1 00:37:07.176 --rc geninfo_all_blocks=1 00:37:07.176 --rc geninfo_unexecuted_blocks=1 00:37:07.176 00:37:07.176 ' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.176 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@50 -- # : 0 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:37:07.177 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:07.177 11:20:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:07.177 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # return 0 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:07.437 11:20:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@61 -- # add_to_ns target0 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:37:07.437 10.0.0.1 00:37:07.437 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:37:07.438 10.0.0.2 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:37:07.438 11:20:31 
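Here target0 is moved into the namespace and both ends of the pair get addresses from an integer pool that started at 0x0a000001, which val_to_ip renders as dotted quads (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2); each address is also written to the device's ifalias so later steps can read it back without parsing 'ip addr' output. A sketch of the conversion, with the octet splitting spelled out as explicit shifts (the shifts are an assumption; the trace only shows the printf format and the resulting octets):

    # render a 32-bit integer as a dotted quad, e.g. 167772161 -> 10.0.0.1
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) \
            $((  val        & 0xff ))
    }
    val_to_ip 167772161   # 10.0.0.1 for initiator0
    val_to_ip 167772162   # 10.0.0.2 for target0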
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:37:07.438 11:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:37:07.438 11:20:32 
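Pair 0 is finished by opening the NVMe/TCP port on the initiator-side device and recording both devices in dev_map, after which the loop advances the IP pool by two and starts on pair 1. The firewall step, sketched with the same tagged-comment convention (the iptables-save line is an illustrative cleanup pattern, not taken from this log):

    # accept NVMe/TCP connections arriving on the initiator device
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
    # every rule the harness added can be found (and deleted) by its tag
    iptables-save | grep SPDK_NVMF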
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:37:07.438 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:37:07.698 10.0.0.3 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:37:07.698 10.0.0.4 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 
-- # ip link set initiator1_br up 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:07.698 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:07.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:07.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:37:07.699 00:37:07.699 --- 10.0.0.1 ping statistics --- 00:37:07.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.699 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 
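Connectivity is then checked from the far side of the namespace boundary: initiator addresses are pinged from inside nvmf_ns_spdk, target addresses from the host. A sketch of ping_ip's visible behavior (the real helper resolves the namespace through a bash nameref to NVMF_TARGET_NS_CMD; taking the namespace name directly is a simplification):

    # ping an address once, optionally from inside the target namespace
    ping_ip() {
        local ip=$1 ns=$2
        if [[ -n $ns ]]; then
            ip netns exec "$ns" ping -c 1 "$ip"
        else
            ping -c 1 "$ip"
        fi
    }
    ping_ip 10.0.0.1 nvmf_ns_spdk   # initiator0, reached from the target side
    ping_ip 10.0.0.2                # target0, reached from the host side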
00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:07.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:07.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:37:07.699 00:37:07.699 --- 10.0.0.2 ping statistics --- 00:37:07.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.699 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:37:07.699 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:37:07.699 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:37:07.699 00:37:07.699 --- 10.0.0.3 ping statistics --- 00:37:07.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.699 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:37:07.699 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:37:07.699 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:37:07.699 00:37:07.699 --- 10.0.0.4 ping statistics --- 00:37:07.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.699 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@281 -- # return 0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:07.699 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
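With all four addresses answering, setup.sh backfills the legacy NVMF_* environment variables. It never parses 'ip addr' for this; each address was stored in the device's ifalias at set_ip time, so recovery is a plain file read, done from the host for initiators and through the namespace for targets:

    # read back the addresses stashed in ifalias during set_ip
    NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)    # 10.0.0.1
    NVMF_SECOND_INITIATOR_IP=$(cat /sys/class/net/initiator1/ifalias)   # 10.0.0.3
    NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk \
        cat /sys/class/net/target0/ifalias)                             # 10.0.0.2
    NVMF_SECOND_TARGET_IP=$(ip netns exec nvmf_ns_spdk \
        cat /sys/class/net/target1/ifalias)                             # 10.0.0.4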
nvmf/setup.sh@98 -- # local dev=initiator1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:37:07.700 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:37:07.959 ' 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@328 -- # nvmfpid=96357 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@329 -- # waitforlisten 96357 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96357 ']' 00:37:07.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:07.959 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:07.959 [2024-12-05 11:20:32.448626] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:37:07.959 [2024-12-05 11:20:32.448727] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.959 [2024-12-05 11:20:32.606771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:08.217 [2024-12-05 11:20:32.689388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:08.217 [2024-12-05 11:20:32.689766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:08.217 [2024-12-05 11:20:32.689803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:08.217 [2024-12-05 11:20:32.689822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:08.217 [2024-12-05 11:20:32.689839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
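nvmfappstart launches the target inside the namespace with the traced command and blocks until its RPC socket answers; the DPDK EAL and reactor notices here are the target coming up on cores 0 and 1 (-m 0x3). A condensed sketch, where the polling loop is a simplified stand-in for waitforlisten:

    modprobe nvme-tcp
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # wait until the target answers on its default RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &>/dev/null; do
        sleep 0.1
    done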
00:37:08.217 [2024-12-05 11:20:32.691246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.217 [2024-12-05 11:20:32.691261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96357 00:37:08.217 11:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:08.785 [2024-12-05 11:20:33.132298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.785 11:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:08.785 Malloc0 00:37:08.785 11:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:37:09.043 11:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:09.315 11:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.572 [2024-12-05 11:20:34.099434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.572 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:09.830 [2024-12-05 11:20:34.319525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96448 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96448 /var/tmp/bdevperf.sock 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96448 ']' 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
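The target is then provisioned over RPC: a TCP transport, one malloc-backed namespace, and a single subsystem exported on two ports of the same address, which gives the host two distinct paths to one namespace. The sequence, abbreviating the traced /home/vagrant/spdk_repo/spdk/scripts/rpc.py path as $rpc:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421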
common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:09.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.830 11:20:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:37:10.764 11:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.764 11:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:37:10.764 11:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:37:11.022 11:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:37:11.280 Nvme0n1 00:37:11.280 11:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:37:11.850 Nvme0n1 00:37:11.850 11:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:37:11.850 11:20:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:37:12.784 11:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:37:12.784 11:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:13.041 11:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:13.041 11:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:37:13.041 11:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96536 00:37:13.041 11:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:13.041 11:20:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96357 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:37:19.605 11:20:43 
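Each confirm_io_on_port round follows the same shape: set the ANA state of both listeners, attach a bpftrace probe to the target pid for a few seconds of I/O, then ask the target which port is currently in the expected state. The state flip and the port query, reusing the $rpc abbreviation from above:

    # make 4421 the optimized path and 4420 merely reachable
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized
    # the listener whose first ANA state matches is the expected active port
    $rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
    # -> 4421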
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:19.605 Attaching 4 probes... 00:37:19.605 @path[10.0.0.2, 4421]: 21481 00:37:19.605 @path[10.0.0.2, 4421]: 21466 00:37:19.605 @path[10.0.0.2, 4421]: 21327 00:37:19.605 @path[10.0.0.2, 4421]: 21432 00:37:19.605 @path[10.0.0.2, 4421]: 21435 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96536 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:37:19.605 11:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:19.605 11:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:20.173 11:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:37:20.174 11:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96670 00:37:20.174 11:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96357 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:20.174 11:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:26.751 Attaching 4 probes... 
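The probe output is then parsed to find which port actually carried I/O: every sampled completion shows up as a '@path[address, port]: count' line, and the test only needs the port of the first one. A sketch of the extraction the trace runs over trace.txt (the exact pipeline order is inferred from the three traced commands):

    # trace.txt lines look like:  @path[10.0.0.2, 4421]: 21481
    port=$(cut -d ']' -f1 trace.txt \
        | awk '$1=="@path[10.0.0.2," {print $2}' \
        | sed -n 1p)
    [[ $port == 4421 ]]   # I/O really flowed through the optimized listener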
00:37:26.751 @path[10.0.0.2, 4420]: 21658 00:37:26.751 @path[10.0.0.2, 4420]: 21968 00:37:26.751 @path[10.0.0.2, 4420]: 21971 00:37:26.751 @path[10.0.0.2, 4420]: 22158 00:37:26.751 @path[10.0.0.2, 4420]: 21858 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96670 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:37:26.751 11:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:26.751 11:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:27.011 11:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:37:27.011 11:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96357 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:27.011 11:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96801 00:37:27.011 11:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:33.594 Attaching 4 probes... 
00:37:33.594 @path[10.0.0.2, 4421]: 18224 00:37:33.594 @path[10.0.0.2, 4421]: 21303 00:37:33.594 @path[10.0.0.2, 4421]: 21508 00:37:33.594 @path[10.0.0.2, 4421]: 21637 00:37:33.594 @path[10.0.0.2, 4421]: 21561 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96801 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:37:33.594 11:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:33.594 11:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:33.594 11:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:37:33.594 11:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96357 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:37:33.594 11:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96936 00:37:33.594 11:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:37:40.161 Attaching 4 probes... 
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:37:40.161 Attaching 4 probes...
00:37:40.161
00:37:40.161
00:37:40.161
00:37:40.161
00:37:40.161
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]]
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]]
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96936
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:37:40.161 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
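From the @58/@59 trace lines, set_ANA_state is evidently a two-call wrapper: the first argument becomes the ANA state of the 4420 listener and the second that of the 4421 listener. A plausible reconstruction (the real function in test/nvmf/host/multipath.sh presumably parameterizes the NQN, address, and ports rather than hard-coding them as done here):

# Reconstructed from the multipath.sh@58-59 trace; argument-to-port mapping as observed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

set_ANA_state() {
  # $1: ANA state for the 4420 listener, $2: ANA state for the 4421 listener
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -n "$1"
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized   # the @96 call traced above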
00:37:40.420 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421
00:37:40.420 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97069
00:37:40.420 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96357 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:37:40.420 11:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:37:46.996 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:37:46.996 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:37:46.996 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:37:46.996 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:37:46.996 Attaching 4 probes...
00:37:46.996 @path[10.0.0.2, 4421]: 20633
00:37:46.996 @path[10.0.0.2, 4421]: 21515
00:37:46.996 @path[10.0.0.2, 4421]: 21351
00:37:46.996 @path[10.0.0.2, 4421]: 21312
00:37:46.996 @path[10.0.0.2, 4421]: 21268
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97069
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:37:46.997 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-12-05 11:21:11.633277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1053e90 is same with the state(6) to be set
[... the same tcp.c:1790 *ERROR* line repeated 27 more times (11:21:11.633329 through 11:21:11.633545), all for tqpair=0x1053e90 ...]
00:37:47.256 11:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:37:48.259 11:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:37:48.259 11:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96357 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:37:48.259 11:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97203
00:37:48.259 11:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
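Each confirm_io_on_port cycle traced above (@64-@73) has the same shape: attach the nvmf_path.bt probes to the target, let bdevperf run for six seconds, then require that the listener reporting the requested ANA state and the path the I/O actually took both name the expected port. A sketch assembled from those trace lines; the variable names ($rootdir, $testdir, $tgtpid - 96357 in this run), the redirection into trace.txt, and the backgrounding are assumptions, not the verbatim function body:

confirm_io_on_port() {
  local ana_state=$1 port=$2

  # @64-@66: start the bpf probes against the target and let I/O accumulate
  "$rootdir/scripts/bpftrace.sh" "$tgtpid" "$rootdir/scripts/bpf/nvmf_path.bt" \
    > "$testdir/trace.txt" &
  dtrace_pid=$!
  sleep 6

  # @67: port of the listener currently reporting $ana_state
  active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

  # @68-@69: port the I/O was actually observed on
  io_port=$(cat "$testdir/trace.txt" \
    | awk '$1=="@path[10.0.0.2," {print $2}' | cut -d ']' -f1 | sed -n 1p)

  # @70-@71: both must match the expectation ('' == '' in the all-inaccessible case)
  [[ $active_port == "$port" ]]
  [[ $io_port == "$port" ]]

  # @72-@73: stop tracing and reset for the next cycle
  kill "$dtrace_pid"
  rm -f "$testdir/trace.txt"
}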
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:37:54.822 Attaching 4 probes...
00:37:54.822 @path[10.0.0.2, 4420]: 21343
00:37:54.822 @path[10.0.0.2, 4420]: 21591
00:37:54.822 @path[10.0.0.2, 4420]: 21627
00:37:54.822 @path[10.0.0.2, 4420]: 21818
00:37:54.822 @path[10.0.0.2, 4420]: 21540
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97203
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:37:54.822 11:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-12-05 11:21:19.182596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:37:54.822 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:37:54.822 11:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6
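The @100/@107/@108 steps bracketing this check simulate path loss and recovery at the listener level rather than via ANA state alone: the 4421 listener is dropped entirely (I/O then fails over to 4420, with the tqpair state-change noise seen above, and is confirmed non_optimized on 4420), then it is re-added and marked optimized again. The same sequence reduced to its RPCs, with the paths as they appear in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# @100: drop the optimized path; traffic re-routes to 10.0.0.2:4420
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
sleep 1   # @101

# @107-@108: bring the path back and make it the preferred one again
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized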
00:38:01.394 11:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:38:01.394 11:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96357 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:38:01.394 11:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97396
00:38:01.394 11:21:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:38:07.958 Attaching 4 probes...
00:38:07.958 @path[10.0.0.2, 4421]: 19286
00:38:07.958 @path[10.0.0.2, 4421]: 19196
00:38:07.958 @path[10.0.0.2, 4421]: 19260
00:38:07.958 @path[10.0.0.2, 4421]: 19187
00:38:07.958 @path[10.0.0.2, 4421]: 19101
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97396
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96448
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96448 ']'
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96448
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96448
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2
killing process with pid 96448
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96448'
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96448
00:38:07.958 11:21:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96448
00:38:07.958 {
00:38:07.958   "results": [
00:38:07.958     {
00:38:07.958       "job": "Nvme0n1",
00:38:07.958       "core_mask": "0x4",
00:38:07.958       "workload": "verify",
00:38:07.958       "status": "terminated",
00:38:07.958       "verify_range": {
00:38:07.958         "start": 0,
00:38:07.958         "length": 16384
00:38:07.958       },
00:38:07.958       "queue_depth": 128,
00:38:07.958       "io_size": 4096,
00:38:07.958       "runtime": 55.4396,
00:38:07.958       "iops": 9025.281567688078,
00:38:07.958       "mibps": 35.255006123781556,
00:38:07.958       "io_failed": 0,
00:38:07.958       "io_timeout": 0,
00:38:07.958       "avg_latency_us": 14158.829243412192,
00:38:07.958       "min_latency_us": 721.6761904761905,
00:38:07.958       "max_latency_us": 7030452.419047619
00:38:07.958     }
00:38:07.958   ],
00:38:07.958   "core_count": 1
00:38:07.958 }
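The bdevperf summary above is internally consistent and easy to sanity-check: with 4 KiB I/O, MiB/s is just IOPS/256, and IOPS times runtime gives the total I/O count for the 55.44 s the job ran before being terminated. A quick cross-check with awk, using the reported field values:

# Cross-check the bdevperf result fields quoted above.
awk 'BEGIN {
  iops = 9025.281567688078; runtime = 55.4396; io_size = 4096
  printf "MiB/s: %.6f (reported 35.255006)\n", iops * io_size / (1024 * 1024)
  printf "total I/O: %.0f\n", iops * runtime   # ~500k verified I/Os
}'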
DPDK 24.03.0 initialization... 00:38:07.958 [2024-12-05 11:20:34.385098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96448 ] 00:38:07.958 [2024-12-05 11:20:34.524781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.958 [2024-12-05 11:20:34.577966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:07.958 Running I/O for 90 seconds... 00:38:07.959 11576.00 IOPS, 45.22 MiB/s [2024-12-05T11:21:32.611Z] 11572.50 IOPS, 45.21 MiB/s [2024-12-05T11:21:32.611Z] 11343.67 IOPS, 44.31 MiB/s [2024-12-05T11:21:32.611Z] 11202.75 IOPS, 43.76 MiB/s [2024-12-05T11:21:32.611Z] 11093.20 IOPS, 43.33 MiB/s [2024-12-05T11:21:32.611Z] 11033.33 IOPS, 43.10 MiB/s [2024-12-05T11:21:32.611Z] 10989.00 IOPS, 42.93 MiB/s [2024-12-05T11:21:32.611Z] 10971.00 IOPS, 42.86 MiB/s [2024-12-05T11:21:32.611Z] [2024-12-05 11:20:44.528661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.528766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.528848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.528869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.528893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.528910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.528933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.528950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.528973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.528989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:38:07.959 [2024-12-05 11:20:44.529092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.529743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.529760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.530962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.530991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.531015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.959 [2024-12-05 11:20:44.531032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.531054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:07.959 [2024-12-05 11:20:44.531071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:07.959 [2024-12-05 11:20:44.531093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.531973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.531995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:38:07.960 [2024-12-05 11:20:44.532239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.960 [2024-12-05 11:20:44.532614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:07.960 [2024-12-05 11:20:44.532635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.532650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.532671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.532688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.533978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.533997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:07.961 [2024-12-05 11:20:44.534118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:07.961 [2024-12-05 11:20:44.534870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.961 [2024-12-05 11:20:44.534886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.534907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.534924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.534947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.534963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.534984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
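Every completion in this run carries the same status pair, printed by spdk_nvme_print_completion as "(03/02)". What follows is a minimal illustrative C sketch — not part of this test run — that decodes that pair under the usual NVMe field layout (status code type 0x3 is path-related status, status code 0x02 is asymmetric access inaccessible) and cross-checks one of the throughput samples interleaved below against the I/O size implied by "len:8". The 512-byte logical block size is an assumption here, though it is consistent with the LBA stride of 8 between consecutive len:8 writes above.

/*
 * Illustrative sketch only, not part of the SPDK tree or this test:
 * decode the "(SCT/SC)" pair from the completions above and verify a
 * throughput sample against the 4 KiB I/O size implied by len:8.
 * Assumes 512-byte logical blocks (see note above).
 */
#include <stdio.h>

int main(void)
{
    /* "(03/02)" from the completions above: status code type 0x3
     * (path-related), status code 0x02 (ANA inaccessible). */
    unsigned int sct = 0x03, sc = 0x02;
    printf("sct=0x%02x sc=0x%02x -> %s\n", sct, sc,
           (sct == 0x3 && sc == 0x2) ?
           "asymmetric access inaccessible" : "other");

    /* len:8 blocks * 512 B = 4096 B per I/O, so a sample such as
     * 10938.22 IOPS corresponds to ~42.73 MiB/s, matching the
     * bandwidth figures interleaved with the completions below. */
    double iops = 10938.22;
    printf("%.2f IOPS * 4096 B = %.2f MiB/s\n",
           iops, iops * 4096.0 / (1024.0 * 1024.0));
    return 0;
}

The same arithmetic holds for the other samples in this section (e.g. 10383.48 IOPS * 4096 B / 2^20 = 40.56 MiB/s), confirming that the workload is issuing fixed 4 KiB I/Os throughout.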
00:38:07.962 [2024-12-05 11:20:44.535301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:44.535380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:44.535396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:07.962 10938.22 IOPS, 42.73 MiB/s [2024-12-05T11:21:32.614Z] 10946.80 IOPS, 42.76 MiB/s [2024-12-05T11:21:32.614Z] 10950.55 IOPS, 42.78 MiB/s [2024-12-05T11:21:32.614Z] 10959.83 IOPS, 42.81 MiB/s [2024-12-05T11:21:32.614Z] 10963.77 IOPS, 42.83 MiB/s [2024-12-05T11:21:32.614Z] 10962.29 IOPS, 42.82 MiB/s [2024-12-05T11:21:32.614Z] [2024-12-05 11:20:51.158806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.158906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.158977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.158997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.962 [2024-12-05 11:20:51.159385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:07.962 [2024-12-05 11:20:51.159913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.962 [2024-12-05 11:20:51.159930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.159952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.159970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.159992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 
11:20:51.160009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.160925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.160943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.161781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.161804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.161832] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.161849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.161876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.161893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.161919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.161936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.161962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.161979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 
p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:07.963 [2024-12-05 11:20:51.162358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.963 [2024-12-05 11:20:51.162374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.162973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.162990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 
11:20:51.163115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120864 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.163960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.163985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.164002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.164035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.964 [2024-12-05 11:20:51.164052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:07.964 [2024-12-05 11:20:51.164078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.965 [2024-12-05 11:20:51.164096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:51.164254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.965 [2024-12-05 11:20:51.164273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:07.965 10832.73 IOPS, 42.32 MiB/s [2024-12-05T11:21:32.617Z] 10273.56 IOPS, 40.13 MiB/s [2024-12-05T11:21:32.617Z] 10300.35 IOPS, 40.24 MiB/s [2024-12-05T11:21:32.617Z] 10319.44 IOPS, 40.31 MiB/s [2024-12-05T11:21:32.617Z] 10339.05 IOPS, 40.39 MiB/s [2024-12-05T11:21:32.617Z] 10366.25 IOPS, 40.49 MiB/s [2024-12-05T11:21:32.617Z] 10383.48 IOPS, 40.56 MiB/s [2024-12-05T11:21:32.617Z] [2024-12-05 11:20:58.210075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.210760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.210919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.210985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.211062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.211149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.211238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.211327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.211409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.211475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.211545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.211638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.211706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.211778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.211844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.211905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.211976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.212084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.212162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.212228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.212303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.212368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.212442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.212504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.212561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.212646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.212728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.212794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.212880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.212935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.213017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.213070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.213140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.213205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.213279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.213343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.213694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.213824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.213910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.213991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.214077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.214139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.214211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.214280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.214347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.214399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.214476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.214549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.214654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.214723] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.214795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.214852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.214935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.214993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.215130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.215263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.215387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.215518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.215664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.215796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.215926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:07.965 [2024-12-05 11:20:58.215998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:38:07.965 [2024-12-05 11:20:58.216093 - 11:20:58.228838] [~93 repeated nvme_qpair.c command/completion pairs condensed] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 lba:115736-116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 / READ sqid:1 nsid:1 lba:115160-115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0012-006e p:0 m:0 dnr:0 00:38:07.965-00:38:07.968
10335.77 IOPS, 40.37 MiB/s [2024-12-05T11:21:32.620Z]
9886.39 IOPS, 38.62 MiB/s [2024-12-05T11:21:32.620Z]
9474.46 IOPS, 37.01 MiB/s [2024-12-05T11:21:32.620Z]
9095.48 IOPS, 35.53 MiB/s [2024-12-05T11:21:32.620Z]
8745.65 IOPS, 34.16 MiB/s [2024-12-05T11:21:32.620Z]
8421.74 IOPS, 32.90 MiB/s [2024-12-05T11:21:32.620Z]
8120.96 IOPS, 31.72 MiB/s [2024-12-05T11:21:32.620Z]
7884.38 IOPS, 30.80 MiB/s [2024-12-05T11:21:32.620Z]
7964.23 IOPS, 31.11 MiB/s [2024-12-05T11:21:32.620Z]
8052.84 IOPS, 31.46 MiB/s [2024-12-05T11:21:32.620Z]
8136.03 IOPS, 31.78 MiB/s [2024-12-05T11:21:32.620Z]
8211.39 IOPS, 32.08 MiB/s [2024-12-05T11:21:32.620Z]
8283.15 IOPS, 32.36 MiB/s [2024-12-05T11:21:32.620Z]
8350.60 IOPS, 32.62 MiB/s [2024-12-05T11:21:32.620Z]
[2024-12-05 11:21:11.633711 - 11:21:11.633891] [4 repeated admin pairs condensed] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.968
[2024-12-05 11:21:11.633906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c1a0 is same with the state(6) to be set 00:38:07.968
[2024-12-05 11:21:11.633984 - 11:21:11.637826] [~111 repeated nvme_qpair.c command/completion pairs condensed] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:3448-3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 / WRITE sqid:1 nsid:1 lba:3840-4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.968-00:38:07.971
[2024-12-05 11:21:11.637842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.637859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:07.971 [2024-12-05 11:21:11.637874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.637891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.637908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.637939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.637955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.637971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.637987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:07.971 [2024-12-05 11:21:11.638207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.971 [2024-12-05 11:21:11.638386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.638426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:07.971 [2024-12-05 11:21:11.638440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:07.971 [2024-12-05 11:21:11.638452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3832 len:8 PRP1 0x0 PRP2 0x0 00:38:07.971 [2024-12-05 11:21:11.638468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.971 [2024-12-05 11:21:11.639949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.971 [2024-12-05 11:21:11.640002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c1a0 (9): Bad file descriptor 00:38:07.971 [2024-12-05 11:21:11.640188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.971 [2024-12-05 11:21:11.640215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c1a0 with addr=10.0.0.2, port=4421 00:38:07.971 [2024-12-05 11:21:11.640234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c1a0 is same with the state(6) to be set 00:38:07.971 [2024-12-05 11:21:11.640277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x241c1a0 (9): Bad file descriptor 00:38:07.972 [2024-12-05 11:21:11.640302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.972 [2024-12-05 11:21:11.640319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.972 [2024-12-05 11:21:11.640341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.972 [2024-12-05 11:21:11.640359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:07.972 [2024-12-05 11:21:11.640376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.972 8416.33 IOPS, 32.88 MiB/s [2024-12-05T11:21:32.624Z] 8473.14 IOPS, 33.10 MiB/s [2024-12-05T11:21:32.624Z] 8535.89 IOPS, 33.34 MiB/s [2024-12-05T11:21:32.624Z] 8593.79 IOPS, 33.57 MiB/s [2024-12-05T11:21:32.624Z] 8649.62 IOPS, 33.79 MiB/s [2024-12-05T11:21:32.624Z] 8705.05 IOPS, 34.00 MiB/s [2024-12-05T11:21:32.624Z] 8753.57 IOPS, 34.19 MiB/s [2024-12-05T11:21:32.624Z] 8798.74 IOPS, 34.37 MiB/s [2024-12-05T11:21:32.624Z] 8828.91 IOPS, 34.49 MiB/s [2024-12-05T11:21:32.624Z] 8855.69 IOPS, 34.59 MiB/s [2024-12-05T11:21:32.624Z] [2024-12-05 11:21:21.707417] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:38:07.972 8881.20 IOPS, 34.69 MiB/s [2024-12-05T11:21:32.624Z] 8902.87 IOPS, 34.78 MiB/s [2024-12-05T11:21:32.624Z] 8924.85 IOPS, 34.86 MiB/s [2024-12-05T11:21:32.624Z] 8944.76 IOPS, 34.94 MiB/s [2024-12-05T11:21:32.624Z] 8961.22 IOPS, 35.00 MiB/s [2024-12-05T11:21:32.624Z] 8976.25 IOPS, 35.06 MiB/s [2024-12-05T11:21:32.624Z] 8987.60 IOPS, 35.11 MiB/s [2024-12-05T11:21:32.624Z] 9001.55 IOPS, 35.16 MiB/s [2024-12-05T11:21:32.624Z] 9010.44 IOPS, 35.20 MiB/s [2024-12-05T11:21:32.624Z] 9021.16 IOPS, 35.24 MiB/s [2024-12-05T11:21:32.624Z] Received shutdown signal, test time was about 55.440288 seconds 00:38:07.972 00:38:07.972 Latency(us) 00:38:07.972 [2024-12-05T11:21:32.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.972 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:07.972 Verification LBA range: start 0x0 length 0x4000 00:38:07.972 Nvme0n1 : 55.44 9025.28 35.26 0.00 0.00 14158.83 721.68 7030452.42 00:38:07.972 [2024-12-05T11:21:32.624Z] =================================================================================================================== 00:38:07.972 [2024-12-05T11:21:32.624Z] Total : 9025.28 35.26 0.00 0.00 14158.83 721.68 7030452.42 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@99 -- # sync 00:38:07.972 11:21:32 
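
A quick sanity check on the bdevperf summary above (not part of the test output): at the 4096-byte I/O size shown in the job line, MiB/s is just IOPS * 4096 / 2^20, so the two reported columns should agree with each other:

# Hedged spot-check of the summary table; the numbers are copied from the log above.
awk 'BEGIN { printf "%.2f MiB/s\n", 9025.28 * 4096 / 1048576 }'   # -> 35.26 MiB/s, matching the table
awk 'BEGIN { printf "%.0f total I/Os\n", 9025.28 * 55.44 }'       # ~500k I/Os over the 55.44 s run
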
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@102 -- # set +e 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:07.972 rmmod nvme_tcp 00:38:07.972 rmmod nvme_fabrics 00:38:07.972 rmmod nvme_keyring 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@106 -- # set -e 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@107 -- # return 0 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@336 -- # '[' -n 96357 ']' 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@337 -- # killprocess 96357 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96357 ']' 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96357 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96357 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:07.972 killing process with pid 96357 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96357' 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96357 00:38:07.972 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96357 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@254 -- # local dev 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:38:08.232 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
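
The killprocess trace above leans on the classic kill -0 idiom: signal 0 is never delivered, it only reports whether the PID exists and is signalable. A minimal sketch of the same pattern (the PID is the one reported in the log; substitute your own):

pid=96357                                  # nvmf target PID taken from the log above
if kill -0 "$pid" 2>/dev/null; then
    # ps -o comm= prints only the executable name; the script uses it to refuse killing sudo
    echo "killing $(ps --no-headers -o comm= "$pid") (pid $pid)"
    kill "$pid"
    wait "$pid" 2>/dev/null                # wait only reaps it when it is a child of this shell
fi
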
nvmf/setup.sh@115 -- # [[ -n '' ]] 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@274 -- # iptr 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-save 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:08.233 00:38:08.233 real 1m1.213s 00:38:08.233 user 2m47.404s 00:38:08.233 sys 0m19.306s 00:38:08.233 
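
The iptr step at the end of the teardown above removes every firewall rule the test added by filtering on the SPDK_NVMF comment tag rather than tracking rule numbers. A minimal sketch of that pattern, using the same rule and tag shown in the trace (run as root):

# Add a rule tagged with a comment so it can be found again later:
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
# Cleanup: dump the whole ruleset, drop the tagged lines, and reload the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore
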
11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:08.233 ************************************ 00:38:08.233 END TEST nvmf_host_multipath 00:38:08.233 ************************************ 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:08.233 ************************************ 00:38:08.233 START TEST nvmf_timeout 00:38:08.233 ************************************ 00:38:08.233 11:21:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:38:08.494 * Looking for test storage... 00:38:08.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:08.494 11:21:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:08.494 11:21:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:08.494 11:21:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:08.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.494 --rc genhtml_branch_coverage=1 00:38:08.494 --rc genhtml_function_coverage=1 00:38:08.494 --rc genhtml_legend=1 00:38:08.494 --rc geninfo_all_blocks=1 00:38:08.494 --rc geninfo_unexecuted_blocks=1 00:38:08.494 00:38:08.494 ' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:08.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.494 --rc genhtml_branch_coverage=1 00:38:08.494 --rc genhtml_function_coverage=1 00:38:08.494 --rc genhtml_legend=1 00:38:08.494 --rc geninfo_all_blocks=1 00:38:08.494 --rc geninfo_unexecuted_blocks=1 00:38:08.494 00:38:08.494 ' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:08.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.494 --rc genhtml_branch_coverage=1 00:38:08.494 --rc genhtml_function_coverage=1 00:38:08.494 --rc genhtml_legend=1 00:38:08.494 --rc geninfo_all_blocks=1 00:38:08.494 --rc geninfo_unexecuted_blocks=1 00:38:08.494 00:38:08.494 ' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:08.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.494 --rc genhtml_branch_coverage=1 00:38:08.494 --rc genhtml_function_coverage=1 00:38:08.494 --rc genhtml_legend=1 00:38:08.494 --rc geninfo_all_blocks=1 00:38:08.494 --rc geninfo_unexecuted_blocks=1 00:38:08.494 00:38:08.494 ' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:08.494 
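
The cmp_versions walk above compares lcov's version against 2 one dotted field at a time (1 < 2, so lt 1.15 2 succeeds and the coverage-friendly flags are selected). Where GNU coreutils is available the same check is often written with sort -V; a hedged equivalent (version_lt is an illustrative name, not part of scripts/common.sh):

# Succeeds when $1 is strictly older than $2 (assumes GNU sort with -V support).
version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message
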
11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@50 -- # : 0 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:08.494 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:38:08.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:08.495 11:21:33 
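
The "[: : integer expression expected" line above is a real (if harmless) bash error: common.sh line 31 evaluates '[' '' -eq 1 ']' with the variable unset, and -eq requires integers on both sides. The usual guard is a default expansion before the numeric test; a sketch (SOME_FLAG is a stand-in, the trace does not show the actual variable name):

# Unguarded form fails when the variable is empty, exactly as in the log:
#   [ "$SOME_FLAG" -eq 1 ]        -> "[: : integer expression expected"
# Guarded form substitutes 0 for an unset/empty value first:
[ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"
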
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@260 -- # remove_target_ns 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@280 -- # nvmf_veth_init 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@223 -- # create_target_ns 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@224 -- # create_main_bridge 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@105 -- # delete_main_bridge 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # return 0 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:38:08.495 
11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:08.495 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@28 -- # local -g _dev 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:38:08.755 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up initiator0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local 
dev=initiator0_br in_ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0 up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # add_to_ns target0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772161 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator0/ifalias 00:38:08.756 10.0.0.1 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772162 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:38:08.756 10.0.0.2 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 
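
The val_to_ip steps above turn the address-pool counter into a dotted quad: 167772161 is 0x0A000001, i.e. 10.0.0.1, and each initiator/target pair consumes two consecutive addresses. A self-contained sketch of the same conversion using bit shifts (the function body here is illustrative, not copied from setup.sh):

# Convert a 32-bit integer to dotted-quad notation, one byte per field.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) $((  val        & 255 ))
}
val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772164   # 0x0A000004 -> 10.0.0.4
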
00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target0_br 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:08.756 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/setup.sh@151 -- # set_up initiator1 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target1 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1 up 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target1_br 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # add_to_ns target1 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772163 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:38:08.757 11:21:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:38:08.757 10.0.0.3 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772164 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:38:08.757 10.0.0.4 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator1 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:38:08.757 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:38:09.020 11:21:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target1_br 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@38 -- # ping_ips 2 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:09.020 11:21:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:09.020 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:09.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:09.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:38:09.021 00:38:09.021 --- 10.0.0.1 ping statistics --- 00:38:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.021 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:09.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:09.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:38:09.021 00:38:09.021 --- 10.0.0.2 ping statistics --- 00:38:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.021 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:38:09.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:38:09.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:38:09.021 00:38:09.021 --- 10.0.0.3 ping statistics --- 00:38:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.021 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:38:09.021 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:38:09.021 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:38:09.021 00:38:09.021 --- 10.0.0.4 ping statistics --- 00:38:09.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.021 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@281 -- # return 0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # 
[[ -n initiator1 ]] 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:38:09.021 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:09.022 11:21:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:09.022 ' 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@328 -- # nvmfpid=97771 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@329 -- # waitforlisten 97771 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97771 ']' 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
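[Editor's note] For readability, here is a hand-condensed sketch of the nvmf/setup.sh work traced above, shown for interface pair 1 (pair 0 is identical with initiator0/target0 on 10.0.0.1/10.0.0.2). The nvmf_ns_spdk namespace and nvmf_br bridge were created earlier in the run; val_to_ip is re-derived from the printf call in the trace, not copied from the script:

  # val_to_ip: dotted-quad from a 32-bit pool value (167772163 -> 10.0.0.3)
  val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 255)) $(((val >> 8) & 255)) $((val & 255))
  }

  ip link add initiator1 type veth peer name initiator1_br    # initiator-side veth pair
  ip link set initiator1 up; ip link set initiator1_br up
  ip link add target1 type veth peer name target1_br          # target-side veth pair
  ip link set target1 up; ip link set target1_br up
  ip link set target1 netns nvmf_ns_spdk                      # target end lives in the namespace
  ip addr add 10.0.0.3/24 dev initiator1                      # $(val_to_ip 167772163)
  echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias       # ifalias doubles as the IP registry get_ip_address reads back
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
  echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
  ip link set initiator1 up                                   # re-up after addressing
  ip netns exec nvmf_ns_spdk ip link set target1 up           # moving into a netns downs the link
  ip link set initiator1_br master nvmf_br; ip link set initiator1_br up
  ip link set target1_br master nvmf_br; ip link set target1_br up
  iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3               # initiator reachable from the target netns
  ping -c 1 10.0.0.4                                          # target reachable from the host

The *_br peers all hang off the shared nvmf_br bridge, which is what lets the host-side initiator interfaces reach the target interfaces inside the namespace.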
00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.022 11:21:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:09.287 [2024-12-05 11:21:33.734184] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:38:09.287 [2024-12-05 11:21:33.734300] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.287 [2024-12-05 11:21:33.881541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:09.287 [2024-12-05 11:21:33.926108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.287 [2024-12-05 11:21:33.926174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.287 [2024-12-05 11:21:33.926184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.287 [2024-12-05 11:21:33.926192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.287 [2024-12-05 11:21:33.926199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:09.287 [2024-12-05 11:21:33.927080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.287 [2024-12-05 11:21:33.927090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:10.225 11:21:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:10.483 [2024-12-05 11:21:34.987061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:10.483 11:21:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:38:10.740 Malloc0 00:38:10.740 11:21:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:10.998 11:21:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:11.255 11:21:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:11.513 [2024-12-05 
11:21:36.129479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97863 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97863 /var/tmp/bdevperf.sock 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97863 ']' 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.513 11:21:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:11.771 [2024-12-05 11:21:36.219159] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:38:11.771 [2024-12-05 11:21:36.219287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97863 ] 00:38:11.771 [2024-12-05 11:21:36.370177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.029 [2024-12-05 11:21:36.450229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:12.619 11:21:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.619 11:21:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:38:12.619 11:21:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:38:12.876 11:21:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:38:13.133 NVMe0n1 00:38:13.133 11:21:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97906 00:38:13.133 11:21:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:13.133 11:21:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:38:13.391 Running I/O for 10 seconds... 
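[Editor's note] Condensed from the records above (rpc.py and bdevperf.py abbreviate the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths shown in the trace; all flags are verbatim), the timeout test's RPC choreography is:

  # Target side, over /var/tmp/spdk.sock (nvmf_tgt runs inside nvmf_ns_spdk with -i 0 -e 0xFFFF -m 0x3)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side, over bdevperf's private RPC socket
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests          # the 10-second verify run (-q 128 -o 4096 -w verify -t 10)

The listener is then removed mid-run (next record), pulling the connection out from under the in-flight I/O; the 5-second controller-loss timeout and 2-second reconnect delay are the knobs this test exercises.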
00:38:14.328 11:21:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:38:14.328 10504.00 IOPS, 41.03 MiB/s [2024-12-05T11:21:38.981Z] 
00:38:14.329 [2024-12-05 11:21:38.896492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c510 is same with the state(6) to be set 
00:38:14.329 [... the same recv-state *ERROR* for tqpair=0x145c510 repeats ~35 more times with advancing timestamps; trimmed for readability ...] 
00:38:14.329 [2024-12-05 11:21:38.897120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:38:14.329 [2024-12-05 11:21:38.897171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:14.330 [... matching command/completion pairs for every other in-flight I/O (READ lba 90416-90504, then WRITE lba 90552-90944, len:8 each), all completed ABORTED - SQ DELETION; trimmed for readability ...] 
00:38:14.331 [2024-12-05 11:21:38.898578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:14.331 [2024-12-05 11:21:38.898601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:14.331 [2024-12-05 11:21:38.898914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.898965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91072 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.898975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.898989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.898998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91080 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91088 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91096 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91104 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91112 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91120 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91128 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91136 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91144 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91152 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91160 len:8 PRP1 0x0 PRP2 0x0 00:38:14.331 [2024-12-05 11:21:38.899380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.331 [2024-12-05 11:21:38.899389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.331 [2024-12-05 11:21:38.899397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.331 [2024-12-05 11:21:38.899405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91168 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91176 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91184 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91192 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 
11:21:38.899533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91200 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91208 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91216 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91224 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91232 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91240 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899761] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91248 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91256 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91264 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91272 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899905] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91280 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.899948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91288 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.899967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.899977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:38:14.332 [2024-12-05 11:21:38.899985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.899993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91296 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.900030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.900038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91304 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.900067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.900075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91312 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.900102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.900111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91320 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.900137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.900147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91328 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.900174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.900183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91336 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.900210] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.900218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91344 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.332 [2024-12-05 11:21:38.900245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.332 [2024-12-05 11:21:38.900254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91352 len:8 PRP1 0x0 PRP2 0x0 00:38:14.332 [2024-12-05 11:21:38.900263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.332 [2024-12-05 11:21:38.900273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91360 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91368 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91376 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91384 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91392 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91400 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91408 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91416 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91424 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90512 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 
[2024-12-05 11:21:38.900665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90520 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90528 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90536 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.900749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.900759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:14.333 [2024-12-05 11:21:38.900767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:14.333 [2024-12-05 11:21:38.900776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90544 len:8 PRP1 0x0 PRP2 0x0 00:38:14.333 [2024-12-05 11:21:38.908054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.908282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.333 [2024-12-05 11:21:38.908298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.908311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.333 [2024-12-05 11:21:38.908322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.908335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.333 [2024-12-05 11:21:38.908345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.908356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:14.333 [2024-12-05 11:21:38.908367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:14.333 [2024-12-05 11:21:38.908378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
00:38:14.333 [2024-12-05 11:21:38.908555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:38:14.333 [2024-12-05 11:21:38.908578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1364f50 (9): Bad file descriptor
00:38:14.333 [2024-12-05 11:21:38.908712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:14.333 [2024-12-05 11:21:38.908740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1364f50 with addr=10.0.0.2, port=4420
00:38:14.333 [2024-12-05 11:21:38.908752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364f50 is same with the state(6) to be set
00:38:14.333 [2024-12-05 11:21:38.908769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1364f50 (9): Bad file descriptor
00:38:14.333 [2024-12-05 11:21:38.908786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:38:14.333 [2024-12-05 11:21:38.908798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:38:14.333 [2024-12-05 11:21:38.908811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:38:14.333 [2024-12-05 11:21:38.908822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:38:14.333 [2024-12-05 11:21:38.908835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:38:14.334 11:21:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:38:16.206 5650.50 IOPS, 22.07 MiB/s [2024-12-05T11:21:41.117Z] 3767.00 IOPS, 14.71 MiB/s [2024-12-05T11:21:41.117Z] [2024-12-05 11:21:40.909135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.465 [2024-12-05 11:21:40.909206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1364f50 with addr=10.0.0.2, port=4420
00:38:16.465 [2024-12-05 11:21:40.909224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364f50 is same with the state(6) to be set
00:38:16.465 [2024-12-05 11:21:40.909256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1364f50 (9): Bad file descriptor
00:38:16.465 [2024-12-05 11:21:40.909278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:38:16.465 [2024-12-05 11:21:40.909290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:38:16.465 [2024-12-05 11:21:40.909306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:38:16.465 [2024-12-05 11:21:40.909321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:38:16.465 [2024-12-05 11:21:40.909334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:38:16.465 11:21:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:38:16.465 11:21:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:38:16.465 11:21:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:38:16.724 11:21:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:38:16.724 11:21:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:38:16.724 11:21:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:38:16.724 11:21:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:38:16.983 11:21:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:38:16.983 11:21:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:38:18.205 2825.25 IOPS, 11.04 MiB/s [2024-12-05T11:21:43.115Z] 2260.20 IOPS, 8.83 MiB/s [2024-12-05T11:21:43.115Z] [2024-12-05 11:21:42.909553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.463 [2024-12-05 11:21:42.909652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1364f50 with addr=10.0.0.2, port=4420
00:38:18.463 [2024-12-05 11:21:42.909671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1364f50 is same with the state(6) to be set
00:38:18.463 [2024-12-05 11:21:42.909703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1364f50 (9): Bad file descriptor
00:38:18.463 [2024-12-05 11:21:42.909725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:38:18.464 [2024-12-05 11:21:42.909737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:38:18.464 [2024-12-05 11:21:42.909751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:38:18.464 [2024-12-05 11:21:42.909764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:38:18.464 [2024-12-05 11:21:42.909778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:38:20.335 1883.50 IOPS, 7.36 MiB/s [2024-12-05T11:21:44.987Z] 1614.43 IOPS, 6.31 MiB/s [2024-12-05T11:21:44.987Z] [2024-12-05 11:21:44.909905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:38:20.335 [2024-12-05 11:21:44.909986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:38:20.335 [2024-12-05 11:21:44.909999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:38:20.335 [2024-12-05 11:21:44.910011] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:38:20.335 [2024-12-05 11:21:44.910026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
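The @41/@37 helpers traced above reduce to two RPC queries against the bdevperf control socket. A stand-alone sketch reconstructed from the command lines visible in this log (not the verbatim host/timeout.sh source; paths are as they appear in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path used in this run
    sock=/var/tmp/bdevperf.sock                       # bdevperf RPC socket

    # host/timeout.sh@41: list the names of the attached NVMe controllers
    get_controller() { "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'; }
    # host/timeout.sh@37: list the names of the bdevs those controllers expose
    get_bdev() { "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'; }

    # While the controller is still attached these print NVMe0 / NVMe0n1; after
    # the ctrlr-loss timeout fires (as later in this log) both come back empty.
    [[ "$(get_controller)" == "NVMe0" ]] && echo "controller NVMe0 present"
    [[ "$(get_bdev)" == "NVMe0n1" ]] && echo "bdev NVMe0n1 present"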
00:38:21.272 1412.62 IOPS, 5.52 MiB/s
00:38:21.272 Latency(us)
00:38:21.272 [2024-12-05T11:21:45.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:21.272 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:21.272 Verification LBA range: start 0x0 length 0x4000
00:38:21.272 NVMe0n1 : 8.08 1398.11 5.46 15.84 0.00 90483.74 1872.46 7030452.42
00:38:21.272 [2024-12-05T11:21:45.925Z] ===================================================================================================================
00:38:21.273 [2024-12-05T11:21:45.925Z] Total : 1398.11 5.46 15.84 0.00 90483.74 1872.46 7030452.42
00:38:21.273 {
00:38:21.273 "results": [
00:38:21.273 {
00:38:21.273 "job": "NVMe0n1",
00:38:21.273 "core_mask": "0x4",
00:38:21.273 "workload": "verify",
00:38:21.273 "status": "finished",
00:38:21.273 "verify_range": {
00:38:21.273 "start": 0,
00:38:21.273 "length": 16384
00:38:21.273 },
00:38:21.273 "queue_depth": 128,
00:38:21.273 "io_size": 4096,
00:38:21.273 "runtime": 8.083055,
00:38:21.273 "iops": 1398.1099967772086,
00:38:21.273 "mibps": 5.461367174910971,
00:38:21.273 "io_failed": 128,
00:38:21.273 "io_timeout": 0,
00:38:21.273 "avg_latency_us": 90483.73663054302,
00:38:21.273 "min_latency_us": 1872.4571428571428,
00:38:21.273 "max_latency_us": 7030452.419047619
00:38:21.273 }
00:38:21.273 ],
00:38:21.273 "core_count": 1
00:38:21.273 }
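The JSON summary above is machine-readable, so the headline numbers can be pulled out with jq. A hedged example; field names are taken from the blob itself, and results.json is a hypothetical file holding a capture of that output:

    # Hypothetical: the bdevperf JSON summary above saved to results.json
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os, avg \(.avg_latency_us) us"' results.json
    # prints: NVMe0n1: 1398.1099967772086 IOPS, 128 failed I/Os, avg 90483.73663054302 us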
00:38:21.840 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:38:22.098 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:38:22.098 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:38:22.098 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:38:22.098 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:38:22.098 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:38:22.098 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97906
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97863
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97863 ']'
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97863
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97863
00:38:22.357 killing process with pid 97863
Received shutdown signal, test time was about 9.157956 seconds
00:38:22.357
00:38:22.357 Latency(us)
[2024-12-05T11:21:47.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T11:21:47.009Z] ===================================================================================================================
[2024-12-05T11:21:47.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:38:22.357 killing process with pid 97863
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97863'
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97863
00:38:22.357 11:21:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97863
00:38:22.616 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:23.185 [2024-12-05 11:21:47.530753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98070
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98070 /var/tmp/bdevperf.sock
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98070 ']'
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:23.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:23.185 11:21:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
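The killprocess helper traced above (common/autotest_common.sh@954 through @978) boils down to the following pattern. A simplified sketch, not the verbatim helper; it omits the sudo special case visible at @964:

    # Kill a background SPDK app by pid and reap it, as done above for the
    # first bdevperf instance (pid 97863).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # @954: refuse an empty pid
        kill -0 "$pid" || return 1                # @958: is it still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # @960: e.g. reactor_2
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap it so the RPC socket is released
    }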
00:38:23.185 [2024-12-05 11:21:47.610534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98070 ] 00:38:23.185 [2024-12-05 11:21:47.758727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.185 [2024-12-05 11:21:47.831766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:24.123 11:21:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:24.123 11:21:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:38:24.123 11:21:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:38:24.123 11:21:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:38:24.691 NVMe0n1 00:38:24.691 11:21:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98118 00:38:24.691 11:21:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:38:24.691 11:21:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:24.691 Running I/O for 10 seconds... 00:38:25.628 11:21:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:25.891 11114.00 IOPS, 43.41 MiB/s [2024-12-05T11:21:50.543Z] [2024-12-05 11:21:50.362408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with the state(6) to be set 00:38:25.891 [2024-12-05 11:21:50.362529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b46b0 is same with 
00:38:25.893 [2024-12-05 11:21:50.363831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:25.893 [2024-12-05 11:21:50.363888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pairing repeats for every outstanding I/O on qid:1 from 11:21:50.363913 through 11:21:50.366703: READs covering lba 103896-104448 and WRITEs covering lba 104488-104912, len:8 each, every one completed ABORTED - SQ DELETION ...]
00:38:25.896 [2024-12-05 11:21:50.366751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:25.896 [2024-12-05 11:21:50.366762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104456 len:8 PRP1 0x0 PRP2 0x0
00:38:25.896 [2024-12-05 11:21:50.366773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:25.896 [2024-12-05 11:21:50.366788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:38:25.896 [2024-12-05 11:21:50.366797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:25.896 [2024-12-05 11:21:50.366806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104464 len:8 PRP1 0x0 PRP2 0x0
00:38:25.896 [2024-12-05 11:21:50.366816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:25.896 [2024-12-05 11:21:50.366827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
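Every reconnect attempt in the window that follows fails with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. When reproducing this by hand, one way to watch the retry loop is to poll the controller until a path comes back; a hedged sketch reusing $rpc and $sock from the earlier sketch (bdev_nvme_get_controllers is the standard RPC, but the exact JSON field names vary across SPDK versions, so the grep pattern is illustrative only):

    # wait until bdev_nvme reports the NVMe0 path enabled again
    while ! "$rpc" -s "$sock" bdev_nvme_get_controllers -n NVMe0 \
            | grep -q '"state": "enabled"'; do
        sleep 0.5
    done
    echo "NVMe0 reconnected"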
[2024-12-05 11:21:50.366836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:25.896 [2024-12-05 11:21:50.366844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104472 len:8 PRP1 0x0 PRP2 0x0 00:38:25.896 [2024-12-05 11:21:50.366856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.896 [2024-12-05 11:21:50.366867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:25.896 [2024-12-05 11:21:50.366875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:25.896 [2024-12-05 11:21:50.366885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104480 len:8 PRP1 0x0 PRP2 0x0 00:38:25.896 [2024-12-05 11:21:50.366895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:25.896 [2024-12-05 11:21:50.367203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:25.896 [2024-12-05 11:21:50.367297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:25.896 [2024-12-05 11:21:50.367418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.896 [2024-12-05 11:21:50.367447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025f50 with addr=10.0.0.2, port=4420 00:38:25.896 [2024-12-05 11:21:50.367459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025f50 is same with the state(6) to be set 00:38:25.896 [2024-12-05 11:21:50.367478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:25.896 [2024-12-05 11:21:50.367495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:25.896 [2024-12-05 11:21:50.367506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:25.896 [2024-12-05 11:21:50.367519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:25.896 [2024-12-05 11:21:50.367531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
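A note on the connect() failures above: errno 111 is Linux's ECONNREFUSED. With the listener removed, every connect() from the initiator is refused, and the bdev_nvme layer retries roughly once per second (compare the 11:21:50 attempt above with the 11:21:51 attempt below). A hypothetical probe for the same condition using bash's /dev/tcp redirection; this is not part of the test scripts:

    # wait until the target accepts TCP connections again (hypothetical probe)
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        sleep 1    # roughly the initiator's reconnect cadence seen in this log
    done
    echo "port 4420 is accepting connections again"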
00:38:25.896 [2024-12-05 11:21:50.367543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:25.896 11:21:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:38:26.834 6493.50 IOPS, 25.37 MiB/s [2024-12-05T11:21:51.486Z] [2024-12-05 11:21:51.367729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.834 [2024-12-05 11:21:51.367800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025f50 with addr=10.0.0.2, port=4420 00:38:26.834 [2024-12-05 11:21:51.367817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025f50 is same with the state(6) to be set 00:38:26.834 [2024-12-05 11:21:51.367845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:26.834 [2024-12-05 11:21:51.367866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:26.834 [2024-12-05 11:21:51.367877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:26.834 [2024-12-05 11:21:51.367891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:26.834 [2024-12-05 11:21:51.367905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:26.834 [2024-12-05 11:21:51.367920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:26.834 11:21:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:27.093 [2024-12-05 11:21:51.651601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:27.093 11:21:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 98118 00:38:27.919 4329.00 IOPS, 16.91 MiB/s [2024-12-05T11:21:52.571Z] [2024-12-05 11:21:52.382362] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
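The sequence above is the core of the timeout test: the listener is dropped, in-flight I/O aborts, the initiator's resets fail until host/timeout.sh@91 re-adds the listener, and the next reset succeeds. A minimal sketch of that listener bounce, reusing the exact rpc.py verbs and addresses from the trace; the real host/timeout.sh wraps this in more setup:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # queued I/O aborts with SQ DELETION
    sleep 1
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # reconnect and reset now succeed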
00:38:29.838 3246.75 IOPS, 12.68 MiB/s [2024-12-05T11:21:55.424Z] 4461.20 IOPS, 17.43 MiB/s [2024-12-05T11:21:56.357Z] 5652.33 IOPS, 22.08 MiB/s [2024-12-05T11:21:57.344Z] 6466.00 IOPS, 25.26 MiB/s [2024-12-05T11:21:58.280Z] 7055.38 IOPS, 27.56 MiB/s [2024-12-05T11:21:59.217Z] 7557.67 IOPS, 29.52 MiB/s [2024-12-05T11:21:59.217Z] 7964.00 IOPS, 31.11 MiB/s 00:38:34.565 Latency(us) 00:38:34.565 [2024-12-05T11:21:59.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.565 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:34.565 Verification LBA range: start 0x0 length 0x4000 00:38:34.565 NVMe0n1 : 10.01 7966.07 31.12 0.00 0.00 16039.20 1646.20 3019898.88 00:38:34.565 [2024-12-05T11:21:59.217Z] =================================================================================================================== 00:38:34.565 [2024-12-05T11:21:59.217Z] Total : 7966.07 31.12 0.00 0.00 16039.20 1646.20 3019898.88 00:38:34.565 { 00:38:34.565 "results": [ 00:38:34.565 { 00:38:34.565 "job": "NVMe0n1", 00:38:34.565 "core_mask": "0x4", 00:38:34.565 "workload": "verify", 00:38:34.565 "status": "finished", 00:38:34.565 "verify_range": { 00:38:34.565 "start": 0, 00:38:34.565 "length": 16384 00:38:34.565 }, 00:38:34.565 "queue_depth": 128, 00:38:34.565 "io_size": 4096, 00:38:34.565 "runtime": 10.009199, 00:38:34.565 "iops": 7966.072010357672, 00:38:34.565 "mibps": 31.117468790459657, 00:38:34.565 "io_failed": 0, 00:38:34.565 "io_timeout": 0, 00:38:34.565 "avg_latency_us": 16039.19766939359, 00:38:34.565 "min_latency_us": 1646.2019047619049, 00:38:34.565 "max_latency_us": 3019898.88 00:38:34.565 } 00:38:34.565 ], 00:38:34.565 "core_count": 1 00:38:34.565 } 00:38:34.824 11:21:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98229 00:38:34.824 11:21:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:34.824 11:21:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:38:34.824 Running I/O for 10 seconds... 
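The human-readable table and the JSON blob above describe the same run; the MiB/s figure is simply iops * io_size / 2^20, i.e. 7966.07 * 4096 / 1048576 ≈ 31.12. A one-liner for extracting the same numbers from a saved copy of the JSON; the file name is hypothetical and jq is assumed to be available:

    jq -r '.results[0]
           | "\(.job): \(.iops) IOPS = \(.iops * .io_size / 1048576) MiB/s over \(.runtime)s"' \
        bdevperf_results.json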
00:38:35.760 11:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:36.022 10548.00 IOPS, 41.20 MiB/s [2024-12-05T11:22:00.674Z] [2024-12-05 11:22:00.435187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b2d40 is same with the state(6) to be set 00:38:36.022 [2024-12-05 11:22:00.435824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.435886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.435912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.435924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.435936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.435948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.435960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.435970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.435981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.435992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:38:36.022 [2024-12-05 11:22:00.436169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.022 [2024-12-05 11:22:00.436225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.022 [2024-12-05 11:22:00.436235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436389] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.023 [2024-12-05 11:22:00.436828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.436985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.436995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.437006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.437016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.437027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.437037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:36.023 [2024-12-05 11:22:00.437048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.437057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.437068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.023 [2024-12-05 11:22:00.437078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.023 [2024-12-05 11:22:00.437088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91944 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 
[2024-12-05 11:22:00.437936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.024 [2024-12-05 11:22:00.437948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.024 [2024-12-05 11:22:00.437958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.437969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.437978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.437989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.437999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:36.025 [2024-12-05 11:22:00.438361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:36.025 [2024-12-05 11:22:00.438676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:36.025 [2024-12-05 11:22:00.438722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:36.025 [2024-12-05 11:22:00.438731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92168 len:8 PRP1 0x0 PRP2 0x0 00:38:36.025 [2024-12-05 11:22:00.438741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:36.025 [2024-12-05 11:22:00.438990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:38:36.025 [2024-12-05 11:22:00.439082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:36.025 [2024-12-05 11:22:00.439205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.025 [2024-12-05 11:22:00.439229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025f50 with addr=10.0.0.2, port=4420 00:38:36.025 [2024-12-05 11:22:00.439242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025f50 is same with the state(6) to be set 00:38:36.025 [2024-12-05 11:22:00.439260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:36.025 [2024-12-05 11:22:00.439277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:38:36.025 [2024-12-05 11:22:00.439288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:38:36.025 [2024-12-05 11:22:00.439300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:38:36.025 [2024-12-05 11:22:00.439313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
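Every completion in the dump above carries status (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. Removing the listener tears down the queue pair, and deleting the submission queue aborts everything still queued on it. A quick tally of the aborted opcodes from a saved copy of this log; the file name is hypothetical:

    grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log                                # total aborted completions
    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' nvmf_timeout.log | sort | uniq -c    # split by opcode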
00:38:36.025 [2024-12-05 11:22:00.439325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:38:36.025 11:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:38:36.961 5697.00 IOPS, 22.25 MiB/s [2024-12-05T11:22:01.613Z] [2024-12-05 11:22:01.439514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:36.961 [2024-12-05 11:22:01.439598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025f50 with addr=10.0.0.2, port=4420 00:38:36.961 [2024-12-05 11:22:01.439616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025f50 is same with the state(6) to be set 00:38:36.961 [2024-12-05 11:22:01.439645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:36.961 [2024-12-05 11:22:01.439667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:38:36.961 [2024-12-05 11:22:01.439679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:38:36.961 [2024-12-05 11:22:01.439692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:38:36.961 [2024-12-05 11:22:01.439706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:38:36.961 [2024-12-05 11:22:01.439719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:38:37.896 3798.00 IOPS, 14.84 MiB/s [2024-12-05T11:22:02.548Z] [2024-12-05 11:22:02.439918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:37.896 [2024-12-05 11:22:02.439993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025f50 with addr=10.0.0.2, port=4420 00:38:37.896 [2024-12-05 11:22:02.440010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025f50 is same with the state(6) to be set 00:38:37.896 [2024-12-05 11:22:02.440047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:37.896 [2024-12-05 11:22:02.440069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:38:37.896 [2024-12-05 11:22:02.440081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:38:37.897 [2024-12-05 11:22:02.440095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:38:37.897 [2024-12-05 11:22:02.440109] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:38:37.897 [2024-12-05 11:22:02.440123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:38:38.849 2848.50 IOPS, 11.13 MiB/s [2024-12-05T11:22:03.501Z] [2024-12-05 11:22:03.442870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:38.849 [2024-12-05 11:22:03.442943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1025f50 with addr=10.0.0.2, port=4420 00:38:38.849 [2024-12-05 11:22:03.442960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025f50 is same with the state(6) to be set 00:38:38.849 [2024-12-05 11:22:03.443165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1025f50 (9): Bad file descriptor 00:38:38.849 [2024-12-05 11:22:03.443362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:38:38.849 [2024-12-05 11:22:03.443384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:38:38.849 [2024-12-05 11:22:03.443398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:38:38.849 [2024-12-05 11:22:03.443412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:38:38.849 [2024-12-05 11:22:03.443426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:38:38.849 11:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:39.106 [2024-12-05 11:22:03.705671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.106 11:22:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98229 00:38:40.041 2278.80 IOPS, 8.90 MiB/s [2024-12-05T11:22:04.693Z] [2024-12-05 11:22:04.470192] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
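(For orientation: the recovery just logged is the scripted listener drop and restore in host/timeout.sh. The sleep at line 101 lets reconnect attempts fail for three seconds, nvmf_subsystem_add_listener at line 102 restores the port, and the host's next reconnect then succeeds, ending in "Resetting controller successful". A minimal sketch of that drop-and-restore sequence, reusing the exact rpc.py invocations visible in the trace; the $rpc and $nqn shorthands are assumptions.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed shorthand
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # I/O starts aborting
  sleep 3                                                              # reconnects fail with errno 111
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420     # next reconnect succeeds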
00:38:41.909 3591.17 IOPS, 14.03 MiB/s [2024-12-05T11:22:07.496Z] 4628.57 IOPS, 18.08 MiB/s [2024-12-05T11:22:08.432Z] 5420.75 IOPS, 21.17 MiB/s [2024-12-05T11:22:09.367Z] 5963.33 IOPS, 23.29 MiB/s [2024-12-05T11:22:09.367Z] 6372.20 IOPS, 24.89 MiB/s 00:38:44.715 Latency(us) 00:38:44.715 [2024-12-05T11:22:09.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.715 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:38:44.715 Verification LBA range: start 0x0 length 0x4000 00:38:44.715 NVMe0n1 : 10.01 6381.61 24.93 4811.64 0.00 11417.21 534.43 3019898.88 00:38:44.715 [2024-12-05T11:22:09.367Z] =================================================================================================================== 00:38:44.715 [2024-12-05T11:22:09.367Z] Total : 6381.61 24.93 4811.64 0.00 11417.21 0.00 3019898.88 00:38:44.974 { 00:38:44.974 "results": [ 00:38:44.974 { 00:38:44.974 "job": "NVMe0n1", 00:38:44.974 "core_mask": "0x4", 00:38:44.974 "workload": "verify", 00:38:44.974 "status": "finished", 00:38:44.974 "verify_range": { 00:38:44.974 "start": 0, 00:38:44.974 "length": 16384 00:38:44.974 }, 00:38:44.974 "queue_depth": 128, 00:38:44.974 "io_size": 4096, 00:38:44.974 "runtime": 10.005317, 00:38:44.974 "iops": 6381.606899611476, 00:38:44.974 "mibps": 24.92815195160733, 00:38:44.974 "io_failed": 48142, 00:38:44.974 "io_timeout": 0, 00:38:44.974 "avg_latency_us": 11417.20954598798, 00:38:44.974 "min_latency_us": 534.4304761904762, 00:38:44.974 "max_latency_us": 3019898.88 00:38:44.974 } 00:38:44.974 ], 00:38:44.974 "core_count": 1 00:38:44.974 } 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98070 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98070 ']' 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98070 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98070 00:38:44.974 killing process with pid 98070 00:38:44.974 Received shutdown signal, test time was about 10.000000 seconds 00:38:44.974 00:38:44.974 Latency(us) 00:38:44.974 [2024-12-05T11:22:09.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.974 [2024-12-05T11:22:09.626Z] =================================================================================================================== 00:38:44.974 [2024-12-05T11:22:09.626Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98070' 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98070 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98070 00:38:44.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
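(Before the next stage starts, a note on the numbers just reported: the JSON block is internally consistent with the summary table. mibps is iops times the 4096-byte io_size, and the Fail/s column is io_failed divided by runtime. A quick arithmetic check, an illustration rather than part of the test output:)

  awk 'BEGIN { iops = 6381.606899611476; runtime = 10.005317
               printf "%.2f MiB/s  %.2f Fail/s\n", iops * 4096 / 1048576, 48142 / runtime }'
  # prints "24.93 MiB/s  4811.64 Fail/s", matching the mibps field and the table columns above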
00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98355 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98355 /var/tmp/bdevperf.sock 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98355 ']' 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:44.974 11:22:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:45.234 [2024-12-05 11:22:09.661835] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:38:45.234 [2024-12-05 11:22:09.662683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98355 ] 00:38:45.234 [2024-12-05 11:22:09.816504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.234 [2024-12-05 11:22:09.870212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:46.168 11:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:46.168 11:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:38:46.168 11:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98384 00:38:46.168 11:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98355 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:38:46.168 11:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:38:46.426 11:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:38:46.684 NVMe0n1 00:38:46.684 11:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98437 00:38:46.684 11:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:38:46.684 11:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:46.992 Running I/O for 10 seconds... 
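(The second half of the test repeats the workload under RPC control: bdevperf is started daemonized with -z on /var/tmp/bdevperf.sock, a bpftrace probe is attached to pid 98355, the controller is attached with a 5 s controller-loss timeout and a 2 s reconnect delay, and perform_tests kicks off the 10-second randread run. A condensed sketch of that control flow, built only from the commands visible in the trace; the $spdk and $sock shorthands, the backgrounding, and the socket-wait loop are assumptions, the test itself uses waitforlisten for the last of these.)

  spdk=/home/vagrant/spdk_repo/spdk   # assumed shorthand
  sock=/var/tmp/bdevperf.sock
  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &
  while [ ! -S $sock ]; do sleep 0.1; done   # wait for the RPC socket to appear
  $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1 -e 9
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &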
00:38:47.932 11:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:47.932 19914.00 IOPS, 77.79 MiB/s [2024-12-05T11:22:12.584Z] [2024-12-05 11:22:12.428035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b58f0 is same with the state(6) to be set
[... the identical recv-state record repeats, timestamps 11:22:12.428128 through 11:22:12.428709, while the listener is torn down; duplicates elided ...]
00:38:47.932 [2024-12-05 11:22:12.429140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:47.932 [2024-12-05 11:22:12.429198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further READ print_command / ABORTED - SQ DELETION print_completion pairs repeat for every outstanding I/O on the deleted submission queue (cid and lba vary, all len:8 on qid:1), timestamps 11:22:12.429221 through 11:22:12.432804; duplicates elided ...]
00:38:47.935 [2024-12-05 11:22:12.432818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6e820 is same with the state(6) to be set
00:38:47.935 [2024-12-05 11:22:12.432835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:38:47.936 [2024-12-05 11:22:12.432846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:47.936 [2024-12-05 11:22:12.432857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95728 len:8 PRP1 0x0 PRP2 0x0
00:38:47.936 [2024-12-05 11:22:12.432872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:47.936 [2024-12-05 11:22:12.433060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:38:47.936 [2024-12-05 11:22:12.433091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:47.936 [2024-12-05 11:22:12.433106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:38:47.936 [2024-12-05 11:22:12.433121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:47.936 [2024-12-05 11:22:12.433145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:38:47.936 [2024-12-05 11:22:12.433158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.936 [2024-12-05 11:22:12.433170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:47.936 [2024-12-05 11:22:12.433182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:47.936 [2024-12-05 11:22:12.433197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b02f50 is same with the state(6) to be set 00:38:47.936 [2024-12-05 11:22:12.433447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:38:47.936 [2024-12-05 11:22:12.433478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b02f50 (9): Bad file descriptor 00:38:47.936 [2024-12-05 11:22:12.433575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:47.936 [2024-12-05 11:22:12.433615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b02f50 with addr=10.0.0.2, port=4420 00:38:47.936 [2024-12-05 11:22:12.433629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b02f50 is same with the state(6) to be set 00:38:47.936 [2024-12-05 11:22:12.433649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b02f50 (9): Bad file descriptor 00:38:47.936 [2024-12-05 11:22:12.433667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:38:47.936 [2024-12-05 11:22:12.433678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:38:47.936 [2024-12-05 11:22:12.433691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:38:47.936 [2024-12-05 11:22:12.433708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:38:47.936 [2024-12-05 11:22:12.433722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:38:47.936 11:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98437 00:38:49.808 10754.00 IOPS, 42.01 MiB/s [2024-12-05T11:22:14.460Z] 7169.33 IOPS, 28.01 MiB/s [2024-12-05T11:22:14.460Z] [2024-12-05 11:22:14.448921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:49.808 [2024-12-05 11:22:14.448989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b02f50 with addr=10.0.0.2, port=4420 00:38:49.808 [2024-12-05 11:22:14.449004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b02f50 is same with the state(6) to be set 00:38:49.808 [2024-12-05 11:22:14.449031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b02f50 (9): Bad file descriptor 00:38:49.808 [2024-12-05 11:22:14.449060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:38:49.808 [2024-12-05 11:22:14.449072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:38:49.808 [2024-12-05 11:22:14.449084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
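The errno = 111 failures above are ordinary ECONNREFUSED results from posix_sock_create: while the controller is down, nothing is accepting on 10.0.0.2:4420, so every reconnect attempt dies at connect(). As an illustrative one-off diagnostic only (this is not part of timeout.sh), a shell probe against the same address and port taken from the log could look like this:

  # hypothetical manual check, not from the test scripts
  if timeout 1 bash -c 'cat < /dev/null > /dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener accepting on 10.0.0.2:4420"
  else
      echo "connect failed - consistent with the errno = 111 lines above"
  fi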
00:38:49.808 [2024-12-05 11:22:14.449097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:38:49.808 [2024-12-05 11:22:14.449108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:38:52.117 5377.00 IOPS, 21.00 MiB/s [2024-12-05T11:22:16.769Z] 4301.60 IOPS, 16.80 MiB/s [2024-12-05T11:22:16.769Z] [2024-12-05 11:22:16.449284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:52.117 [2024-12-05 11:22:16.449349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b02f50 with addr=10.0.0.2, port=4420 00:38:52.117 [2024-12-05 11:22:16.449364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b02f50 is same with the state(6) to be set 00:38:52.117 [2024-12-05 11:22:16.449388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b02f50 (9): Bad file descriptor 00:38:52.117 [2024-12-05 11:22:16.449405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:38:52.117 [2024-12-05 11:22:16.449414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:38:52.117 [2024-12-05 11:22:16.449425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:38:52.117 [2024-12-05 11:22:16.449436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:38:52.117 [2024-12-05 11:22:16.449447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:38:54.050 3584.67 IOPS, 14.00 MiB/s [2024-12-05T11:22:18.702Z] 3072.57 IOPS, 12.00 MiB/s [2024-12-05T11:22:18.702Z] [2024-12-05 11:22:18.449522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:38:54.050 [2024-12-05 11:22:18.449576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:38:54.050 [2024-12-05 11:22:18.449596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:38:54.050 [2024-12-05 11:22:18.449607] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:38:54.050 [2024-12-05 11:22:18.449619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
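The failed cycles above repeat on a fixed cadence: the "resetting controller" notices land at roughly 11:22:12.43, 11:22:14.45, 11:22:16.45 and 11:22:18.45, about two seconds apart, which matches the "reconnect delay bdev controller NVMe0" probes counted from trace.txt further down. Assuming this console output were saved to a file (nvmf.log is a hypothetical name, one log record per line), the spacing can be verified with a short pipeline:

  grep 'resetting controller' nvmf.log \
    | sed -n 's/.*\[2024-12-05 \([0-9:.]*\)\].*/\1/p' \
    | awk -F: 'NR > 1 { printf "delta %.3f s\n", ($1*3600 + $2*60 + $3) - prev }
               { prev = $1*3600 + $2*60 + $3 }'
  # expected: a near-zero delta for the back-to-back first retry, then ~2.0 s gaps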
00:38:54.984 2688.50 IOPS, 10.50 MiB/s 00:38:54.984 Latency(us) 00:38:54.984 [2024-12-05T11:22:19.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.984 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:38:54.984 NVMe0n1 : 8.08 2660.35 10.39 15.83 0.00 47861.07 1927.07 7030452.42 00:38:54.984 [2024-12-05T11:22:19.636Z] =================================================================================================================== 00:38:54.984 [2024-12-05T11:22:19.636Z] Total : 2660.35 10.39 15.83 0.00 47861.07 1927.07 7030452.42 00:38:54.984 { 00:38:54.984 "results": [ 00:38:54.984 { 00:38:54.984 "job": "NVMe0n1", 00:38:54.984 "core_mask": "0x4", 00:38:54.984 "workload": "randread", 00:38:54.984 "status": "finished", 00:38:54.984 "queue_depth": 128, 00:38:54.984 "io_size": 4096, 00:38:54.984 "runtime": 8.084649, 00:38:54.984 "iops": 2660.3504988280874, 00:38:54.984 "mibps": 10.391994136047217, 00:38:54.984 "io_failed": 128, 00:38:54.984 "io_timeout": 0, 00:38:54.984 "avg_latency_us": 47861.0667497733, 00:38:54.984 "min_latency_us": 1927.0704761904763, 00:38:54.984 "max_latency_us": 7030452.419047619 00:38:54.984 } 00:38:54.984 ], 00:38:54.984 "core_count": 1 00:38:54.984 } 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:54.984 Attaching 5 probes... 00:38:54.984 1147.904498: reset bdev controller NVMe0 00:38:54.984 1147.977140: reconnect bdev controller NVMe0 00:38:54.984 3163.249613: reconnect delay bdev controller NVMe0 00:38:54.984 3163.269985: reconnect bdev controller NVMe0 00:38:54.984 5163.625366: reconnect delay bdev controller NVMe0 00:38:54.984 5163.643726: reconnect bdev controller NVMe0 00:38:54.984 7163.961498: reconnect delay bdev controller NVMe0 00:38:54.984 7163.981389: reconnect bdev controller NVMe0 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98384 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98355 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98355 ']' 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98355 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98355 00:38:54.984 killing process with pid 98355 00:38:54.984 Received shutdown signal, test time was about 8.160752 seconds 00:38:54.984 00:38:54.984 Latency(us) 00:38:54.984 [2024-12-05T11:22:19.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:54.984 [2024-12-05T11:22:19.636Z] =================================================================================================================== 00:38:54.984 [2024-12-05T11:22:19.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:54.984 11:22:19 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98355' 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98355 00:38:54.984 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98355 00:38:55.242 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@99 -- # sync 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@102 -- # set +e 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:55.499 11:22:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:55.499 rmmod nvme_tcp 00:38:55.499 rmmod nvme_fabrics 00:38:55.499 rmmod nvme_keyring 00:38:55.499 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:55.499 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@106 -- # set -e 00:38:55.499 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@107 -- # return 0 00:38:55.499 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@336 -- # '[' -n 97771 ']' 00:38:55.499 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@337 -- # killprocess 97771 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97771 ']' 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97771 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97771 00:38:55.500 killing process with pid 97771 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97771' 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97771 00:38:55.500 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97771 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@342 -- # nvmf_fini 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@254 -- # local dev 00:38:56.066 11:22:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # _dev=0 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # dev_map=() 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@274 -- # iptr 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-save 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-restore 00:38:56.066 00:38:56.066 real 0m47.727s 00:38:56.066 user 2m17.528s 00:38:56.066 sys 0m6.858s 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.066 ************************************ 00:38:56.066 END TEST nvmf_timeout 00:38:56.066 ************************************ 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:56.066 00:38:56.066 real 5m40.891s 00:38:56.066 user 14m16.902s 00:38:56.066 sys 1m23.044s 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.066 11:22:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.066 ************************************ 00:38:56.066 END TEST nvmf_host 00:38:56.066 ************************************ 00:38:56.066 11:22:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:38:56.066 11:22:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:38:56.066 11:22:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:56.066 11:22:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:56.066 11:22:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:56.066 11:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:56.066 ************************************ 00:38:56.066 START TEST nvmf_target_core_interrupt_mode 00:38:56.066 ************************************ 00:38:56.066 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:56.325 * Looking for test storage... 
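The nvmftestfini/nvmf_fini sequence above is a symmetric teardown of the virtual test network: the initiator-side kernel modules are unloaded, the bridge and the initiator veth ends are deleted (target0/target1 already disappeared together with the nvmf_ns_spdk namespace, hence the two "continue" branches), and the firewall is restored with only the SPDK-tagged rules dropped. Collapsed from the xtrace output, the cleanup amounts to the following commands; this is a paraphrase of what the trace shows, not the setup.sh source:

  modprobe -v -r nvme-tcp          # also drags out nvme_fabrics / nvme_keyring
  ip link delete nvmf_br           # main bridge
  ip link delete initiator0
  ip link delete initiator1        # target ends were removed with the netns
  iptables-save | grep -v SPDK_NVMF | iptables-restore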
00:38:56.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.325 --rc genhtml_branch_coverage=1 00:38:56.325 --rc genhtml_function_coverage=1 00:38:56.325 --rc genhtml_legend=1 00:38:56.325 --rc geninfo_all_blocks=1 00:38:56.325 --rc geninfo_unexecuted_blocks=1 00:38:56.325 00:38:56.325 ' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.325 --rc genhtml_branch_coverage=1 00:38:56.325 --rc genhtml_function_coverage=1 00:38:56.325 --rc genhtml_legend=1 00:38:56.325 --rc geninfo_all_blocks=1 00:38:56.325 --rc geninfo_unexecuted_blocks=1 00:38:56.325 00:38:56.325 ' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.325 --rc genhtml_branch_coverage=1 00:38:56.325 --rc genhtml_function_coverage=1 00:38:56.325 --rc genhtml_legend=1 00:38:56.325 --rc geninfo_all_blocks=1 00:38:56.325 --rc geninfo_unexecuted_blocks=1 00:38:56.325 00:38:56.325 ' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.325 --rc genhtml_branch_coverage=1 00:38:56.325 --rc genhtml_function_coverage=1 00:38:56.325 --rc genhtml_legend=1 00:38:56.325 --rc geninfo_all_blocks=1 00:38:56.325 --rc geninfo_unexecuted_blocks=1 00:38:56.325 00:38:56.325 ' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.325 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:56.326 11:22:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:56.326 ************************************ 00:38:56.326 START TEST nvmf_abort 00:38:56.326 ************************************ 00:38:56.326 11:22:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:56.588 * Looking for test storage... 00:38:56.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.588 --rc genhtml_branch_coverage=1 00:38:56.588 --rc genhtml_function_coverage=1 00:38:56.588 --rc genhtml_legend=1 00:38:56.588 --rc geninfo_all_blocks=1 00:38:56.588 --rc geninfo_unexecuted_blocks=1 00:38:56.588 00:38:56.588 ' 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.588 --rc genhtml_branch_coverage=1 00:38:56.588 --rc genhtml_function_coverage=1 00:38:56.588 --rc genhtml_legend=1 00:38:56.588 --rc geninfo_all_blocks=1 00:38:56.588 --rc geninfo_unexecuted_blocks=1 00:38:56.588 00:38:56.588 ' 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.588 --rc genhtml_branch_coverage=1 00:38:56.588 --rc genhtml_function_coverage=1 00:38:56.588 --rc genhtml_legend=1 00:38:56.588 --rc geninfo_all_blocks=1 00:38:56.588 --rc geninfo_unexecuted_blocks=1 00:38:56.588 00:38:56.588 ' 00:38:56.588 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:56.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.589 --rc genhtml_branch_coverage=1 00:38:56.589 --rc genhtml_function_coverage=1 00:38:56.589 --rc genhtml_legend=1 00:38:56.589 --rc geninfo_all_blocks=1 00:38:56.589 --rc geninfo_unexecuted_blocks=1 00:38:56.589 00:38:56.589 ' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:38:56.589 11:22:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@280 -- # nvmf_veth_init 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@223 -- # create_target_ns 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:56.589 11:22:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # create_main_bridge 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@105 -- # delete_main_bridge 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:38:56.589 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=veth 
ip=167772161 transport=tcp ips 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator0 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target0 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@207 -- # ip link set target0 up 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up target0_br 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:38:56.590 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns target0 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:38:56.849 10.0.0.1 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 
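
The addresses handed to set_ip come from a 32-bit integer pool: 167772161 is 0x0A000001, which val_to_ip renders as 10.0.0.1, and each initiator/target pair consumes two consecutive values (the value being converted just above, 167772162, becomes 10.0.0.2). A minimal standalone sketch of that conversion; the byte-shift arithmetic is an assumption about the helper's internals, and only the name and the observed input/output pairs come from the trace:

    # Hypothetical re-implementation of setup.sh's val_to_ip, for illustration only.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) $((  val         & 0xff ))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2
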
00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:38:56.849 10.0.0.2 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator0 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 
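
Stripped of the eval and xtrace plumbing, the first interface pair reduces to a short iproute2 sequence: one veth pair per side, the target end moved into the nvmf_ns_spdk namespace, addresses from a shared /24, and the *_br peers enslaved to the nvmf_br bridge. A condensed manual equivalent, restating only commands that appear verbatim in the surrounding trace:

    ip netns add nvmf_ns_spdk
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up
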
00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target0_br 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:38:56.849 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth 
initiator1 initiator1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator1 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target1 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1 up 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up target1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns 
target1 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772163 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:38:56.850 10.0.0.3 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772164 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:38:56.850 
11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:38:56.850 10.0.0.4 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator1 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:38:56.850 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target1_br 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 
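
For every initiator device the harness also opens the NVMe/TCP port in the firewall, tagging each rule with an SPDK_NVMF comment, presumably so teardown can match and delete it later. The rule issued for initiator0 a few entries back (the initiator1 twin follows just below):

    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
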
00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 2 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:57.109 11:22:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:57.109 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:57.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:57.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:38:57.110 00:38:57.110 --- 10.0.0.1 ping statistics --- 00:38:57.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.110 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 
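
Note how get_ip_address resolves an address: it does not parse `ip addr` output but reads back the ifalias file that set_ip populated earlier, so the alias doubles as the harness's record of each device's IP. The equivalent manual check, using the same paths as the trace:

    cat /sys/class/net/initiator0/ifalias                          # 10.0.0.1
    ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias  # 10.0.0.2
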
00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:57.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:57.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:38:57.110 00:38:57.110 --- 10.0.0.2 ping statistics --- 00:38:57.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.110 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.3' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:38:57.110 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:57.110 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:38:57.110 00:38:57.110 --- 10.0.0.3 ping statistics --- 00:38:57.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.110 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:38:57.110 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:38:57.110 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:38:57.110 00:38:57.110 --- 10.0.0.4 ping statistics --- 00:38:57.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:57.110 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # return 0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:57.110 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:57.111 11:22:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:38:57.111 
11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:57.111 ' 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp 
-o' 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:57.111 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=98850 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 98850 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 98850 ']' 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.369 11:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:57.369 [2024-12-05 11:22:21.820539] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:57.369 [2024-12-05 11:22:21.821903] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:38:57.369 [2024-12-05 11:22:21.821971] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:57.369 [2024-12-05 11:22:21.982417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:57.627 [2024-12-05 11:22:22.071868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:57.627 [2024-12-05 11:22:22.071941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:57.627 [2024-12-05 11:22:22.071957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:57.627 [2024-12-05 11:22:22.071971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:57.627 [2024-12-05 11:22:22.071983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
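
nvmfappstart launches the target inside the namespace with interrupt mode enabled. The core mask -m 0xE is binary 1110, i.e. cores 1-3, which matches the "Total cores available: 3" notice above and the three reactor-start notices that follow. The launch command, exactly as the trace issues it:

    ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE
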
00:38:57.627 [2024-12-05 11:22:22.073575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:57.627 [2024-12-05 11:22:22.073681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:57.627 [2024-12-05 11:22:22.073684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.627 [2024-12-05 11:22:22.214457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:57.628 [2024-12-05 11:22:22.215435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:57.628 [2024-12-05 11:22:22.215445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:57.628 [2024-12-05 11:22:22.216365] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 [2024-12-05 11:22:22.939420] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 Malloc0 00:38:58.564 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 Delay0 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.564 
11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 [2024-12-05 11:22:23.039678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:58.564 11:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:58.824 [2024-12-05 11:22:23.218455] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:00.737 Initializing NVMe Controllers 00:39:00.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:00.737 controller IO queue size 128 less than required 00:39:00.737 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:39:00.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:39:00.737 Initialization complete. Launching workers. 
00:39:00.737 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36049 00:39:00.737 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36106, failed to submit 66 00:39:00.737 success 36049, unsuccessful 57, failed 0 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:00.737 rmmod nvme_tcp 00:39:00.737 rmmod nvme_fabrics 00:39:00.737 rmmod nvme_keyring 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 98850 ']' 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 98850 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 98850 ']' 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 98850 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:00.737 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98850 00:39:01.001 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:01.001 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:01.001 killing process with pid 98850 00:39:01.001 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98850' 00:39:01.001 
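Reading the abort counters as a consistency check (an interpretation of the summary, not extra tool output): the controller line's 36106 submitted aborts split into 36049 successful plus 57 unsuccessful, the 66 "failed to submit" are aborts the driver could not queue at all, and on the namespace line the 36049 "failed" I/Os are exactly the ones terminated by the successful aborts, leaving 123 I/Os that completed normally during the one-second run.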
11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 98850 00:39:01.001 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 98850 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:01.259 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:39:01.260 11:22:25 
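Stripped of the shell bookkeeping, nvmftestfini's network teardown reduces to a handful of ip(8) calls (a sketch; note the namespace removal is what makes the host-side target0/target1 checks below fall through to 'continue', since those devices vanished with the namespace):

  ip netns delete nvmf_ns_spdk    # _remove_target_ns: takes target0/target1 with it
  ip link delete nvmf_br          # main bridge
  ip link delete initiator0
  ip link delete initiator1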
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:39:01.260 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:39:01.518 00:39:01.518 real 0m5.022s 00:39:01.518 user 0m9.355s 00:39:01.518 sys 0m1.892s 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:01.518 ************************************ 00:39:01.518 END TEST nvmf_abort 00:39:01.518 ************************************ 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:01.518 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:01.519 ************************************ 00:39:01.519 START TEST nvmf_ns_hotplug_stress 00:39:01.519 ************************************ 00:39:01.519 11:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:39:01.519 * Looking for test storage... 
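The iptr step in the teardown above is a compact way to undo every firewall rule the suite added: because each rule was installed with an identifying comment, filtering the saved ruleset and restoring it removes them all in one pass (pipeline exactly as traced):

  iptables-save | grep -v SPDK_NVMF | iptables-restore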
00:39:01.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:01.519 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:01.519 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:39:01.519 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:01.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.779 --rc genhtml_branch_coverage=1 00:39:01.779 --rc genhtml_function_coverage=1 00:39:01.779 --rc genhtml_legend=1 00:39:01.779 --rc geninfo_all_blocks=1 00:39:01.779 --rc geninfo_unexecuted_blocks=1 00:39:01.779 00:39:01.779 ' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:01.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.779 --rc genhtml_branch_coverage=1 00:39:01.779 --rc genhtml_function_coverage=1 00:39:01.779 --rc genhtml_legend=1 00:39:01.779 --rc geninfo_all_blocks=1 00:39:01.779 --rc geninfo_unexecuted_blocks=1 00:39:01.779 00:39:01.779 ' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:01.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.779 --rc genhtml_branch_coverage=1 00:39:01.779 --rc genhtml_function_coverage=1 00:39:01.779 --rc genhtml_legend=1 00:39:01.779 --rc geninfo_all_blocks=1 00:39:01.779 --rc geninfo_unexecuted_blocks=1 00:39:01.779 00:39:01.779 ' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:01.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:01.779 --rc genhtml_branch_coverage=1 00:39:01.779 --rc genhtml_function_coverage=1 
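The lcov version gate above walks scripts/common.sh's field-by-field comparison. The same algorithm as a self-contained sketch (simplified: the in-tree code also validates each field through decimal() before comparing):

  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2 on the first field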
00:39:01.779 --rc genhtml_legend=1 00:39:01.779 --rc geninfo_all_blocks=1 00:39:01.779 --rc geninfo_unexecuted_blocks=1 00:39:01.779 00:39:01.779 ' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:39:01.779 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:39:01.780 11:22:26 
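A few lines up, nvme gen-hostnqn seeds the initiator identity. Its output is just the RFC 4122 UUID form of an NQN, so an equivalent without nvme-cli looks like this (a sketch that assumes uuidgen from util-linux; the hostid reuses the UUID portion, as the trace shows):

  NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}   # e.g. f58d48c7-b3e4-4841-baf8-b1718dbfb2a6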
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@280 -- # nvmf_veth_init 
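build_nvmf_app_args above is where every target in this suite picks up --interrupt-mode. A condensed sketch of the appends seen in the trace (the guard values 0 and 1 come from suite configuration variables whose names the tracer abbreviates away, so treat the comments here as assumptions):

  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id plus tracepoint group mask
  NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless running without hugepages
  NVMF_APP+=(--interrupt-mode)                  # the '[' 1 -eq 1 ']' branch: interrupt mode is on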
00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@223 -- # create_target_ns 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # create_main_bridge 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@105 -- # delete_main_bridge 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:01.780 11:22:26 
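nvmf_veth_init's scaffolding up to this point, as the bare commands the eval'd strings expand to (verbatim from the trace):

  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'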
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator0 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:39:01.780 11:22:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target0 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0 up 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target0_br 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.780 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target0 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 
167772161 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:01.781 10.0.0.1 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:01.781 10.0.0.2 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator0 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:39:01.781 11:22:26 
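The IP pool is handed around as a plain integer; val_to_ip turns it back into dotted-quad form (167772161 is 0x0A000001, i.e. 10.0.0.1). A self-contained sketch of the conversion that produces the same printf arguments seen in the trace:

  val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
          $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
  }
  val_to_ip 167772161   # 10.0.0.1 (initiator0)
  val_to_ip 167772162   # 10.0.0.2 (target0, assigned inside the namespace)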
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up target0_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.781 11:22:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator1 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.781 11:22:26 
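Pair 0 is now fully wired, and the pair-1 commands running here repeat the identical recipe with the next two pool addresses (10.0.0.3/10.0.0.4). One pair end-to-end, condensed from the trace:

  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk                     # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip link set initiator0_br master nvmf_br                   # both _br peers join the bridge
  ip link set target0_br master nvmf_br
  ip link set initiator0 up; ip link set initiator0_br up; ip link set target0_br up
  ip netns exec nvmf_ns_spdk ip link set target0 up
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port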
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:39:01.781 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target1 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1 up 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target1_br 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target1 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip 
initiator1 167772163 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772163 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:39:02.041 10.0.0.3 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:39:02.041 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772164 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:39:02.042 11:22:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:39:02.042 10.0.0.4 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 
00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up target1_br 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 2 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:39:02.042 11:22:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:02.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:02.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:39:02.042 00:39:02.042 --- 10.0.0.1 ping statistics --- 00:39:02.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.042 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 
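
With addressing done, ping_ips smoke-tests each pair in both directions: the initiator-side address (10.0.0.1 above) is pinged from inside the nvmf_ns_spdk namespace, and the target-side address is pinged from the host, so a failure here points at the veth/bridge plumbing rather than at NVMe-oF itself. The traced helper reduces to roughly the following sketch (the real function resolves the namespace through a bash nameref, NVMF_TARGET_NS_CMD):

    ping_ip() {
        local ip=$1 ns=$2
        if [[ -n $ns ]]; then
            ip netns exec "$ns" ping -c 1 "$ip"   # from inside the target netns
        else
            ping -c 1 "$ip"                       # from the host side
        fi
    }

    ping_ip 10.0.0.1 nvmf_ns_spdk   # namespace -> initiator address
    ping_ip 10.0.0.2                # host -> target address
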
00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:02.042 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:02.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:02.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:39:02.042 00:39:02.043 --- 10.0.0.2 ping statistics --- 00:39:02.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.043 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:39:02.043 11:22:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:39:02.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:02.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:39:02.043 00:39:02.043 --- 10.0.0.3 ping statistics --- 00:39:02.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.043 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 
-- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:39:02.043 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:02.043 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.125 ms 00:39:02.043 00:39:02.043 --- 10.0.0.4 ping statistics --- 00:39:02.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.043 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # return 0 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:39:02.043 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:39:02.302 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # 
get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 
-- # [[ -n target1 ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:02.303 ' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=99168 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 99168 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 99168 
']' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:02.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:02.303 11:22:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:02.303 [2024-12-05 11:22:26.855567] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:02.303 [2024-12-05 11:22:26.856994] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:02.303 [2024-12-05 11:22:26.857071] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.562 [2024-12-05 11:22:27.006003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:02.562 [2024-12-05 11:22:27.069502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:02.562 [2024-12-05 11:22:27.069566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:02.562 [2024-12-05 11:22:27.069576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:02.562 [2024-12-05 11:22:27.069584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:02.562 [2024-12-05 11:22:27.069600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:02.562 [2024-12-05 11:22:27.070977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:02.562 [2024-12-05 11:22:27.071080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:02.562 [2024-12-05 11:22:27.071077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.562 [2024-12-05 11:22:27.202801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:02.562 [2024-12-05 11:22:27.203859] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:02.562 [2024-12-05 11:22:27.203930] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:02.562 [2024-12-05 11:22:27.204980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
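
nvmfappstart above amounts to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers. The -m 0xE mask pins three reactors to cores 1-3 (matching the "Total cores available: 3" and "Reactor started on core N" notices), -e 0xFFFF enables all tracepoint groups, and --interrupt-mode is what triggers the spdk_interrupt_mode_enable and per-thread intr-mode notices. A condensed sketch of the start-and-wait sequence, paraphrasing waitforlisten rather than quoting it (paths shortened):

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Poll the RPC socket until the target answers
    while ! ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || break   # stop waiting if nvmf_tgt exited early
        sleep 0.5
    done
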
00:39:03.131 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:03.131 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:39:03.131 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:03.131 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:03.131 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:03.390 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.390 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:39:03.390 11:22:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:03.649 [2024-12-05 11:22:28.112965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.649 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:03.908 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:04.166 [2024-12-05 11:22:28.621496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:04.166 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:04.424 11:22:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:04.424 Malloc0 00:39:04.742 11:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:04.742 Delay0 00:39:04.742 11:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.018 11:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:05.276 NULL1 00:39:05.276 11:22:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:39:05.534 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:05.534 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=99299 00:39:05.534 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:05.534 11:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.912 Read completed with error (sct=0, sc=11) 00:39:06.912 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:06.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:06.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:07.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:07.171 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:07.171 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:07.429 true 00:39:07.429 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:07.429 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.366 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.366 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:08.366 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:08.623 true 00:39:08.623 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:08.623 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.880 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.136 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:09.136 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:09.136 true 00:39:09.137 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:09.137 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.067 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.324 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:10.324 11:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:10.583 true 00:39:10.583 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:10.583 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.843 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.100 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:11.100 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:11.357 true 00:39:11.357 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:11.357 11:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:12.289 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.547 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:12.547 11:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:12.805 true 00:39:12.805 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:12.805 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.078 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.078 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
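
The pattern repeating from here on is the stress loop proper: while spdk_nvme_perf (PID 99299) keeps issuing random reads, the script yanks namespace 1 out, re-adds Delay0, and grows NULL1 by one block per pass. The "Read completed with error (sct=0, sc=11)" lines, mostly suppressed after 999 repeats, are reads racing with the removal; sct=0/sc=11 is consistent with the NVMe generic status Invalid Namespace or Format, which the -Q 1000 option tells perf to tolerate rather than abort on. Reconstructed from the ns_hotplug_stress.sh line numbers visible in the trace (rpc.py paths shortened; any extra checks in the real script are omitted):

    null_size=1000
    while kill -0 "$PERF_PID"; do                                        # line 44
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46
        ((++null_size))                                                  # line 49
        rpc.py bdev_null_resize NULL1 "$null_size"                       # line 50
    done
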
00:39:13.078 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:13.340 true 00:39:13.340 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:13.340 11:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.276 11:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.534 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:14.534 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:14.792 true 00:39:14.792 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:14.792 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.051 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.051 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:15.051 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:15.310 true 00:39:15.310 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:15.310 11:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.244 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.502 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:16.502 11:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:16.759 true 00:39:16.759 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:16.759 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.016 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.273 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:17.273 11:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:17.530 true 00:39:17.530 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:17.530 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.788 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.046 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:18.046 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:18.304 true 00:39:18.304 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:18.304 11:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.240 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.498 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:19.498 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:19.757 true 00:39:19.757 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:19.757 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.016 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.274 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:20.274 11:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:20.534 true 00:39:20.534 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:20.534 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.793 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.051 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:21.051 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:21.310 true 00:39:21.310 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:21.310 11:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:22.247 11:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:22.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:22.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:22.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:22.765 11:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:22.765 11:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:23.024 true 00:39:23.024 11:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:23.024 11:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.590 11:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.848 11:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:23.848 11:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:24.106 true 00:39:24.364 11:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:24.364 11:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.364 11:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.623 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:24.623 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:24.881 true 00:39:24.881 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:24.881 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.140 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.398 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:25.398 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:25.398 true 00:39:25.398 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:25.398 11:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.837 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:39:26.837 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:26.837 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:27.097 true 00:39:27.097 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:27.097 11:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.036 11:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.295 11:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:28.295 11:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:28.295 true 00:39:28.555 11:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:28.555 11:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.555 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.814 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:28.814 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:29.073 true 00:39:29.073 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:29.073 11:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.012 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.272 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:30.272 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:30.272 true 00:39:30.530 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:30.530 11:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.530 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.788 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:30.788 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:31.046 true 00:39:31.046 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:31.046 11:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.978 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.236 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:32.236 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:32.492 true 00:39:32.492 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:32.492 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.750 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.010 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:33.010 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:33.010 true 00:39:33.010 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:33.010 11:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.946 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:34.203 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:34.203 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:34.462 true 00:39:34.462 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:34.462 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.721 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:34.979 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:34.979 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:35.238 true 00:39:35.238 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299 00:39:35.238 11:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:35.497 11:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:35.755 11:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:39:35.755 11:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:39:35.755 Initializing NVMe Controllers
00:39:35.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:35.755 Controller IO queue size 128, less than required.
00:39:35.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:35.755 Controller IO queue size 128, less than required.
00:39:35.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:35.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:39:35.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:35.755 Initialization complete. Launching workers.
00:39:35.755 ========================================================
00:39:35.755                                                                                             Latency(us)
00:39:35.755 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:39:35.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     661.83       0.32  101057.28    3219.26 1030434.08
00:39:35.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   12884.59       6.29    9933.90    2333.51  447051.14
00:39:35.755 ========================================================
00:39:35.755 Total                                                                  :   13546.43       6.61   14385.89    2333.51 1030434.08
00:39:35.755 
00:39:36.014 true
00:39:36.014 11:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99299
00:39:36.014 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (99299) - No such process
00:39:36.014 11:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 99299
00:39:36.014 11:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:36.273 11:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:36.532 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:39:36.532 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:39:36.532 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:39:36.532 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:36.532 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:36.790 null0 00:39:36.790 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:36.790 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:36.790 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:37.049 null1 00:39:37.049 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.049 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.049 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:37.049 null2 00:39:37.049 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.049 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.049 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:37.308 null3 00:39:37.308 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.308 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.308 11:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:37.567 null4 00:39:37.567 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.567 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.567 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:37.827 null5 00:39:37.827 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:37.827 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:37.827 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:38.086 null6 00:39:38.086 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:38.086 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:38.086 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create 
null7 100 4096 00:39:38.345 null7 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
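
The xtrace records above come from the second stage of ns_hotplug_stress.sh: eight null bdevs (null0-null7) are created over RPC, then one add_remove worker per namespace is forked into the background and its PID collected. A minimal sketch of that stage, reconstructed from the sh@14-sh@18 and sh@58-sh@66 fragments visible in this log (the shorthand variables are assumptions, not the verbatim script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand, assumed
    nqn=nqn.2016-06.io.spdk:cnode1

    # sh@14-sh@18: each worker adds and removes its own namespace ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    nthreads=8
    pids=()

    # sh@59-sh@60: create the backing null bdevs (100 MB, 4096-byte blocks)
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096
    done

    # sh@62-sh@66: launch the eight workers concurrently, then reap them
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

The interleaved sh@16/sh@17/sh@18 records that follow are these eight loops racing each other, which is the point of the stress test: concurrent namespace attach/detach against one subsystem.
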
00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
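
Each rpc.py call in this churn is a short-lived process issuing one JSON-RPC request over SPDK's Unix-domain RPC socket (/var/tmp/spdk.sock by default); the CLI subcommand name is the JSON-RPC method name. For illustration only, the add_ns call for null0 corresponds roughly to the request below; the params layout is an assumption to verify against `rpc.py nvmf_subsystem_add_ns -h` for the SPDK build in use:

    python3 - <<'PY'
    import json, socket

    req = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            # namespace field names assumed; check rpc.py -h
            "namespace": {"bdev_name": "null0", "nsid": 1},
        },
    }

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/tmp/spdk.sock")    # SPDK's default RPC socket path
    sock.sendall(json.dumps(req).encode())
    print(sock.recv(65536).decode())      # a single response is expected here
    sock.close()
    PY
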
00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:38.345 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
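
For contrast, the first stage of the test (the sh@44-sh@50 fragments earlier in this log) churned a single namespace while an I/O generator ran: as long as the I/O process (PID 99299 in this run) stayed alive, the script hot-removed and re-added namespace 1 backed by Delay0 and grew the NULL1 bdev by one step per pass. A reconstruction under the same assumptions as the sketch above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand, assumed
    nqn=nqn.2016-06.io.spdk:cnode1
    io_pid=99299        # the backgrounded I/O generator in this run
    null_size=1017      # the counter that reached 1029 above

    # sh@44-sh@50: churn namespace 1 until the I/O generator exits; the last
    # failed liveness check prints the "kill: (99299) - No such process" seen above
    while kill -0 "$io_pid"; do
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0
        (( ++null_size ))
        $rpc bdev_null_resize NULL1 "$null_size"
    done
    wait "$io_pid"      # sh@53: reap the generator, then tear down ns 1 and 2

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" records earlier are the expected side effect: reads in flight against a namespace that has just been hot-removed complete with a status error, and the initiator rate-limits the log spam.
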
00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:38.346 11:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 100314 100315 100318 100319 100321 100323 100325 100328 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:38.605 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:38.864 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.122 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.381 11:23:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.381 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.382 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.382 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.382 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.382 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:39.382 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.382 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.382 11:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.641 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.900 11:23:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.900 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.160 11:23:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:40.160 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:40.420 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.420 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:40.420 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:40.420 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:40.420 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.420 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.420 11:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.420 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:40.677 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:40.933 11:23:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.933 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.190 11:23:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:41.190 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.448 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.448 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.448 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.448 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:41.448 11:23:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.448 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:41.707 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:41.707 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.707 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.707 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:41.707 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.707 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.708 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.708 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.708 11:23:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.708 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.708 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.967 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.967 11:23:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:42.227 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.486 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.486 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:42.745 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.004 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:43.005 11:23:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:43.005 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.265 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.525 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.525 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.525 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.525 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.525 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.525 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.525 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.525 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.525 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.525 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.525 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:43.784 rmmod nvme_tcp 00:39:43.784 rmmod nvme_fabrics 00:39:43.784 rmmod nvme_keyring 
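The interleaved @16/@17/@18 records above are the heart of the stress test: concurrent workers repeatedly attach and detach namespaces on nqn.2016-06.io.spdk:cnode1 until the loop bound is hit. A minimal bash sketch of one such worker, reconstructed from the trace (the helper name, the worker count of eight, and the backgrounding are assumptions; only the rpc.py invocations and the i < 10 bound are taken from the log):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {   # hypothetical helper; nsid N is backed by bdev null(N-1)
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                                  # @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    for n in {1..8}; do            # eight workers would explain the interleaving
        add_remove "$n" "null$((n - 1))" &
    done
    wait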
00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 99168 ']' 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 99168 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 99168 ']' 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 99168 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99168 00:39:43.784 killing process with pid 99168 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99168' 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 99168 00:39:43.784 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 99168 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:39:44.352 11:23:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:44.352 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 
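The teardown traced above has two parts: killprocess checks that pid 99168 is still alive, confirms via ps that it is not a sudo wrapper, then kills and reaps it; afterwards the dev_map sweep deletes each interface only if /sys/class/net/<dev>/address still exists, which is why target0/target1 (removed along with the nvmf_ns_spdk namespace) hit the @261 continue. A hedged sketch of both steps (function shape assumed; the individual checks are from the trace):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_1 in this run
        [[ $name == sudo ]] && return 1             # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null || true
    }
    killprocess 99168

    for dev in initiator0 initiator1 target0 target1; do
        [[ -e /sys/class/net/$dev/address ]] || continue   # target* left with the netns
        ip link delete "$dev"                              # delete_dev in the trace
    done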
00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:39:44.353 00:39:44.353 real 0m42.902s 00:39:44.353 user 3m0.903s 00:39:44.353 sys 0m21.635s 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:44.353 ************************************ 00:39:44.353 END TEST nvmf_ns_hotplug_stress 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:44.353 ************************************ 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:44.353 ************************************ 00:39:44.353 START TEST nvmf_delete_subsystem 00:39:44.353 ************************************ 00:39:44.353 11:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:44.614 * Looking for test storage... 
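The asterisk banners and the real/user/sys block above come from the run_test wrapper, which times an entire suite and brackets its output, then is immediately reused to launch delete_subsystem.sh. A minimal sketch of such a wrapper, assuming the banner/time structure seen in the log (argument validation and the xtrace toggling are omitted):

    run_test() {
        local suite=$1; shift
        echo "************************************"
        echo "START TEST $suite"
        echo "************************************"
        time "$@"             # produces the real/user/sys summary above
        echo "************************************"
        echo "END TEST $suite"
        echo "************************************"
    }

    run_test nvmf_delete_subsystem \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh \
        --transport=tcp --interrupt-mode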
00:39:44.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.614 --rc genhtml_branch_coverage=1 00:39:44.614 --rc genhtml_function_coverage=1 00:39:44.614 --rc genhtml_legend=1 00:39:44.614 --rc geninfo_all_blocks=1 00:39:44.614 --rc geninfo_unexecuted_blocks=1 00:39:44.614 00:39:44.614 ' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.614 --rc genhtml_branch_coverage=1 00:39:44.614 --rc genhtml_function_coverage=1 00:39:44.614 --rc genhtml_legend=1 00:39:44.614 --rc geninfo_all_blocks=1 00:39:44.614 --rc geninfo_unexecuted_blocks=1 00:39:44.614 00:39:44.614 ' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.614 --rc genhtml_branch_coverage=1 00:39:44.614 --rc genhtml_function_coverage=1 00:39:44.614 --rc genhtml_legend=1 00:39:44.614 --rc geninfo_all_blocks=1 00:39:44.614 --rc geninfo_unexecuted_blocks=1 00:39:44.614 00:39:44.614 ' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.614 --rc genhtml_branch_coverage=1 00:39:44.614 --rc genhtml_function_coverage=1 00:39:44.614 --rc 
genhtml_legend=1 00:39:44.614 --rc geninfo_all_blocks=1 00:39:44.614 --rc geninfo_unexecuted_blocks=1 00:39:44.614 00:39:44.614 ' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:44.614 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:39:44.615 11:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@280 -- # nvmf_veth_init 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:39:44.615 11:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@223 -- # create_target_ns 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # create_main_bridge 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@105 -- # delete_main_bridge 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator0 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.615 
11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:39:44.615 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target0 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0 up 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target0_br 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target0 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 
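The set_ip records above turn the pool value 167772161 (hex 0x0A000001) into a dotted quad, hence printf '%u.%u.%u.%u\n' 10 0 0 1. A sketch of the val_to_ip conversion with the byte extraction made explicit (the shift arithmetic is an assumption; only the printf format and the resulting addresses are from the trace):

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >> 8) & 0xff ))  $(( val & 0xff ))
    }
    val_to_ip 167772161   # 10.0.0.1 -> initiator0
    val_to_ip 167772162   # 10.0.0.2 -> target0, inside nvmf_ns_spdk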
00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:39:44.616 10.0.0.1 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:39:44.616 10.0.0.2 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator0 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.616 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link 
set initiator0 up 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target0_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:44.877 
11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator1 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:39:44.877 11:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target1 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:39:44.877 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1 up 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target1_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:39:44.878 
11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772163 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:39:44.878 10.0.0.3 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772164 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:39:44.878 10.0.0.4 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 
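[annotation] Addressing comes from the integer pool declared at setup.sh@25 (ip_pool=0x0a000001, i.e. 10.0.0.1); each pair consumes two consecutive values via `ips=("$ip" $((++ip)))`, so pair 0 gets 10.0.0.1/10.0.0.2 and pair 1 gets 10.0.0.3/10.0.0.4 as seen above. A sketch of the conversion and assignment — the final printf is visible in the trace, but the shift/mask octet extraction in val_to_ip is an assumption about how setup.sh derives it:

  val_to_ip() {
      local val=$1
      # Split a 32-bit integer into dotted-quad form, e.g. 167772163 -> 10.0.0.3
      printf '%u.%u.%u.%u\n' \
          $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
          $((val >> 8 & 0xff)) $((val & 0xff))
  }

  set_ip() {
      local dev=$1 ip
      ip=$(val_to_ip "$2")
      ip addr add "$ip/24" dev "$dev"
      # ifalias doubles as the script's address store; get_ip_address later
      # reads it back instead of parsing `ip addr` output.
      echo "$ip" | tee "/sys/class/net/$dev/ifalias"
  }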
00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target1_br 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:39:44.878 
11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 2 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:44.878 11:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:44.878 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:44.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:44.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:39:44.879 00:39:44.879 --- 10.0.0.1 ping statistics --- 00:39:44.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.879 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 
-- # [[ -n 10.0.0.2 ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:44.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:44.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:39:44.879 00:39:44.879 --- 10.0.0.2 ping statistics --- 00:39:44.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:44.879 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:39:44.879 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:39:45.139 
11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:39:45.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:45.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:39:45.139 00:39:45.139 --- 10.0.0.3 ping statistics --- 00:39:45.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.139 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.4 in_ns= count=1 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:45.139 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:39:45.140 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:45.140 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:39:45.140 00:39:45.140 --- 10.0.0.4 ping statistics --- 00:39:45.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.140 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # return 0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@159 -- # get_net_dev target0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/target1/ifalias 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:45.140 ' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=101726 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 101726 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:45.140 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 101726 ']' 00:39:45.141 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:45.141 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:45.141 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:45.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
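[annotation] At this point both pairs are wired, ping_ips has verified every address, and nvmf_legacy_env has mapped dev_map onto the traditional variables: NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4. The per-pair wiring, collected from the commands actually executed in the trace (pair 0 shown; pair 1 is identical with 10.0.0.3/10.0.0.4):

  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0    type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk            # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip link set initiator0_br master nvmf_br          # both *_br ends join the bridge
  ip link set target0_br master nvmf_br
  # ipts tags each rule so teardown can strip exactly what setup added:
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

Host-side initiator traffic thus reaches the namespaced target through nvmf_br, which is why the cross-namespace pings above succeed in both directions.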
00:39:45.141 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:45.141 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.141 [2024-12-05 11:23:09.742512] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:45.141 [2024-12-05 11:23:09.743893] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:45.141 [2024-12-05 11:23:09.743967] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:45.400 [2024-12-05 11:23:09.893213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:45.400 [2024-12-05 11:23:09.935695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:45.400 [2024-12-05 11:23:09.935741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:45.400 [2024-12-05 11:23:09.935752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:45.400 [2024-12-05 11:23:09.935760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:45.400 [2024-12-05 11:23:09.935767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:45.400 [2024-12-05 11:23:09.936669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:45.400 [2024-12-05 11:23:09.936672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.400 [2024-12-05 11:23:10.008985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:45.400 [2024-12-05 11:23:10.010037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:45.400 [2024-12-05 11:23:10.010224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
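[annotation] nvmfappstart launches the target inside the namespace with -m 0x3 (two reactors, matching the "Reactor started on core 0/1" notices) and --interrupt-mode, which is why every spdk_thread above is switched to intr mode; waitforlisten then blocks until the RPC socket answers. A simplified stand-in for that flow — the launch command is copied from the trace, but the poll loop is only illustrative; the real waitforlisten in autotest_common.sh is more thorough:

  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || exit 1   # bail out if the target died during startup
      sleep 0.5
  done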
00:39:45.400 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:45.400 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:45.400 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:45.400 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:45.400 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 [2024-12-05 11:23:10.110164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 [2024-12-05 11:23:10.134421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 NULL1 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.675 11:23:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 Delay0 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=101758 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:45.675 11:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:45.964 [2024-12-05 11:23:10.351027] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
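[annotation] The rpc_cmd calls above configure the target; rpc_cmd is assumed to be a thin wrapper forwarding to scripts/rpc.py against the app's /var/tmp/spdk.sock. Written out longhand, with every argument taken from the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

Delay0 wraps the null bdev with 1,000,000 us (one second) of latency on every operation class; that is what keeps perf's commands pinned in flight so the upcoming delete races against outstanding I/O.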
00:39:47.868 11:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:47.868 11:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:47.868 11:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:47.868 Read completed with error (sct=0, sc=8)
00:39:47.868 Write completed with error (sct=0, sc=8)
00:39:47.868 starting I/O failed: -6
00:39:47.868 [hundreds more Read/Write completions fail with (sct=0, sc=8), interleaved with 'starting I/O failed: -6', while the queues drain; the distinct transport-state errors follow]
00:39:47.869 [2024-12-05 11:23:12.391983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22c30 is same with the state(6) to be set
00:39:47.869 [2024-12-05 11:23:12.392677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f237e0 is same with the state(6) to be set
00:39:47.869 [2024-12-05 11:23:12.394569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4980000c40 is same with the state(6) to be set
00:39:47.870 [2024-12-05 11:23:12.395957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f498000d4b0 is same with the state(6) to be set
00:39:48.808 [2024-12-05 11:23:13.365671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17aa0 is same with the state(6) to be set
00:39:48.808 [2024-12-05 11:23:13.392228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f498000d020 is same with the state(6) to be set
00:39:48.808 [2024-12-05 11:23:13.392925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f498000d7e0 is same with the state(6) to be set
00:39:48.808 [2024-12-05 11:23:13.393754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f22a50 is same with the state(6) to be set
00:39:48.808 [2024-12-05 11:23:13.394259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f25ea0 is same with the state(6) to be set
00:39:48.808 Initializing NVMe Controllers
00:39:48.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:48.808 Controller IO queue size 128, less than required.
00:39:48.808 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:48.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:39:48.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:39:48.808 Initialization complete. Launching workers.
00:39:48.808 ========================================================
00:39:48.809                                                                    Latency(us)
00:39:48.809 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:39:48.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     176.26       0.09  883652.68     720.53 1015961.73
00:39:48.809 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     191.12       0.09  892372.98    1672.21 1016435.89
00:39:48.809 ========================================================
00:39:48.809 Total                                                                  :     367.38       0.18  888189.12     720.53 1016435.89
00:39:48.809
00:39:48.809 [2024-12-05 11:23:13.394862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17aa0 (9): Bad file descriptor
00:39:48.809 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:39:48.809 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:48.809 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:39:48.809 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101758
00:39:48.809 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101758
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (101758) - No such process
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 101758
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 101758
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 101758
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:49.377 [2024-12-05 11:23:13.922735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=101806 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:49.377 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:49.636 [2024-12-05 11:23:14.099772] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
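
With the target populated, the script starts spdk_nvme_perf against the delayed namespace and, two seconds in, deletes the subsystem out from under it (delete_subsystem.sh lines 26-32 above). Every I/O sits behind Delay0's one-second latency, so the deletion is guaranteed to catch full queues: in-flight commands complete with (sct=0, sc=8), which matches the NVMe generic status 'command aborted due to SQ deletion', new submissions fail with -6, and perf exits reporting 'errors occurred'. The polling loop at lines 34-38 (and again at 57-60 for the second pass) then only has to confirm that perf goes away by itself. A sketch of that control flow, with variable names illustrative and the perf invocation copied from the log:

    # Sketch: delete the subsystem while perf I/O is in flight,
    # then verify perf exits on its own with an error status.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                        # let the 128-deep queues fill behind Delay0
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do      # perf still running?
        (( delay++ > 30 )) && exit 1               # give up after ~15 s of 0.5 s polls
        sleep 0.5
    done
    ! wait "$perf_pid"                             # non-zero exit expected: the aborted I/O counts as errors
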
00:39:49.894 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:49.895 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806 00:39:49.895 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:50.462 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:50.462 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806 00:39:50.462 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:51.027 11:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:51.027 11:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806 00:39:51.027 11:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:51.592 11:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:51.592 11:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806 00:39:51.592 11:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:51.866 11:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:51.866 11:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806 00:39:51.866 11:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:52.433 11:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:52.433 11:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806 00:39:52.433 11:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:52.693 Initializing NVMe Controllers 00:39:52.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:52.693 Controller IO queue size 128, less than required. 00:39:52.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:52.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:52.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:52.693 Initialization complete. Launching workers. 
00:39:52.693 ========================================================
00:39:52.693                                                                    Latency(us)
00:39:52.693 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:39:52.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1004697.90 1000192.50 1018462.63
00:39:52.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1007016.07 1000607.75 1018866.61
00:39:52.693 ========================================================
00:39:52.693 Total                                                                  :     256.00       0.12 1005856.98 1000192.50 1018866.61
00:39:52.693
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101806
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (101806) - No such process
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 101806
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20}
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:39:52.952 rmmod nvme_tcp
00:39:52.952 rmmod nvme_fabrics
00:39:52.952 rmmod nvme_keyring
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 101726 ']'
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 101726
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 101726 ']'
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 101726
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:52.952 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 --
# ps --no-headers -o comm= 101726 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:53.212 killing process with pid 101726 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101726' 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 101726 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 101726 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:39:53.212 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:53.472 11:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:39:53.472 ************************************ 00:39:53.472 END TEST nvmf_delete_subsystem 00:39:53.472 ************************************ 00:39:53.472 00:39:53.472 real 0m9.020s 00:39:53.472 user 0m23.685s 00:39:53.472 sys 0m3.048s 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
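
The teardown that closes the test (nvmftestfini in the records above) unwinds the setup in reverse: flush, unload the host-side NVMe modules, kill the target application, delete the virtual network devices, and strip exactly the iptables rules the suite tagged with an SPDK_NVMF comment. Condensed into plain commands; nvmfpid stands for the target pid, 101726 in this run, and target0/target1 are skipped in the log because their links went away with the target's network namespace:

    # Sketch of the nvmftestfini path seen above (nvmfpid illustrative)
    sync
    modprobe -v -r nvme-tcp                 # the rmmod output shows nvme_fabrics/nvme_keyring leaving too
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf_tgt process (reactor_0)
    ip link delete nvmf_br                  # main bridge
    ip link delete initiator0               # host-side veth ends
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
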
00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:53.472 11:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:53.472 ************************************ 00:39:53.472 START TEST nvmf_host_management 00:39:53.472 ************************************ 00:39:53.472 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:53.472 * Looking for test storage... 00:39:53.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:53.472 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:53.472 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:39:53.472 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:53.732 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:53.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.733 --rc genhtml_branch_coverage=1 00:39:53.733 --rc genhtml_function_coverage=1 00:39:53.733 --rc genhtml_legend=1 00:39:53.733 --rc geninfo_all_blocks=1 00:39:53.733 --rc geninfo_unexecuted_blocks=1 00:39:53.733 00:39:53.733 ' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:53.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.733 --rc genhtml_branch_coverage=1 00:39:53.733 --rc genhtml_function_coverage=1 00:39:53.733 --rc genhtml_legend=1 00:39:53.733 --rc geninfo_all_blocks=1 00:39:53.733 --rc geninfo_unexecuted_blocks=1 00:39:53.733 00:39:53.733 ' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:53.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.733 --rc genhtml_branch_coverage=1 00:39:53.733 --rc genhtml_function_coverage=1 00:39:53.733 --rc genhtml_legend=1 00:39:53.733 --rc geninfo_all_blocks=1 00:39:53.733 --rc geninfo_unexecuted_blocks=1 00:39:53.733 00:39:53.733 ' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:53.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:53.733 --rc genhtml_branch_coverage=1 00:39:53.733 --rc genhtml_function_coverage=1 00:39:53.733 --rc genhtml_legend=1 
00:39:53.733 --rc geninfo_all_blocks=1 00:39:53.733 --rc geninfo_unexecuted_blocks=1 00:39:53.733 00:39:53.733 ' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same dirs, again repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same dirs, again repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same dirs, again repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0
00:39:53.733 11:23:18
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:39:53.733 11:23:18 
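build_nvmf_app_args, traced above, assembles the NVMF_APP command line conditionally; in this run the '[' 1 -eq 1 ']' branch fires, so --interrupt-mode is appended. A condensed sketch of that assembly (the guard variable name is an assumption; the trace only shows the evaluated test):

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NO_HUGE=()
    NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)   # shm id 0, all tracepoint groups
    NVMF_APP+=("${NO_HUGE[@]}")                        # empty in this run
    interrupt_mode=1                                   # assumed flag behind '[' 1 -eq 1 ']'
    (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)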
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@280 -- # nvmf_veth_init 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@223 -- # create_target_ns 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:53.733 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # create_main_bridge 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@105 -- # delete_main_bridge 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br 
-j ACCEPT' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator0 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:39:53.734 11:23:18 
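For each interface pair, create_veth plus set_up reduce to a couple of ip(8) calls; the trace above performs them for initiator0 and its bridge-side peer. The same steps as standalone commands:

    ip link add initiator0 type veth peer name initiator0_br   # device end + bridge end
    ip link set initiator0 up
    ip link set initiator0_br up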
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target0 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0 up 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target0_br 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target0 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:53.734 11:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:39:53.734 10.0.0.1 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:39:53.734 10.0.0.2 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator0 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.734 11:23:18 
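set_ip converts the pool integer to dotted-quad form before ip addr add: 167772161 is 0x0A000001, i.e. 10.0.0.1, and the target side gets the next address. A self-contained sketch of the conversion matching the traced printf '%u.%u.%u.%u' (the shifting here is one way to split the octets; setup.sh may split them differently):

    val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2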
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:39:53.734 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:39:53.735 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target0_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:39:53.995 11:23:18 
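Note the topology being built here: each target-side veth end is moved into the nvmf_ns_spdk namespace, while both *_br peers stay in the root namespace and are enslaved to nvmf_br, which is what gives the initiator L2 reachability to the namespaced target. The same plumbing for pair 0 as plain commands:

    ip netns add nvmf_ns_spdk                  # done once by create_target_ns
    ip link set target0 netns nvmf_ns_spdk     # add_to_ns
    ip link set initiator0_br master nvmf_br   # add_to_bridge, both bridge-side ends
    ip link set target0_br master nvmf_br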
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator1 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # 
ip link set initiator1 up 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target1 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1 up 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target1_br 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target1 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n 
'' ]] 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772163 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:39:53.995 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:39:53.996 10.0.0.3 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772164 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:39:53.996 10.0.0.4 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator1 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 
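Pair 1 repeats the whole sequence with ip_pool advanced by two, which is why initiator1/target1 land on 10.0.0.3/10.0.0.4. The pool arithmetic from the (( _dev++, ip_pool += 2 )) steps, reduced to a self-contained sketch (assumes the pool never crosses an octet boundary, as in this run):

    ip_pool=$((0x0a000001))
    for id in 0 1; do
      printf 'pair %d: initiator 10.0.0.%d, target 10.0.0.%d\n' \
        "$id" $(( ip_pool & 0xff )) $(( (ip_pool + 1) & 0xff ))
      ip_pool=$(( ip_pool + 2 ))
    done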
00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target1_br 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:53.996 11:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 2 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:39:53.996 11:23:18 
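Every firewall rule in this trace goes through the ipts helper, which appends a comment embedding the rule's own spec ('SPDK_NVMF:...') so teardown can later locate and delete exactly the rules the test added. A plausible one-liner equivalent (the real helper in nvmf/common.sh may differ in detail):

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT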
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:53.996 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:53.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:53.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:39:53.997 00:39:53.997 --- 10.0.0.1 ping statistics --- 00:39:53.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.997 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
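get_ip_address does not parse ip-addr output; it reads back the alias that set_ip wrote into /sys/class/net/<dev>/ifalias, executing in the target namespace when one is named. A condensed sketch (the real function takes the namespace as a nameref to a command prefix, simplified here to a plain name):

    get_ip_address() {
      local dev=$1 ns=$2 ip
      if [[ -n $ns ]]; then
        ip=$(ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias")
      else
        ip=$(cat "/sys/class/net/$dev/ifalias")
      fi
      [[ -n $ip ]] && echo "$ip"
    }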
00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:53.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:53.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:39:53.997 00:39:53.997 --- 10.0.0.2 ping statistics --- 00:39:53.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.997 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:39:53.997 11:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:39:53.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:53.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:39:53.997 00:39:53.997 --- 10.0.0.3 ping statistics --- 00:39:53.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.997 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 
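ping_ips probes both directions of each pair: the initiator address from inside the namespace, the target address from the root namespace. A minimal sketch of ping_ip under the same simplification as above:

    ping_ip() {
      local ip=$1 ns=$2
      if [[ -n $ns ]]; then
        ip netns exec "$ns" ping -c 1 "$ip"
      else
        ping -c 1 "$ip"
      fi
    }
    ping_ip 10.0.0.3 nvmf_ns_spdk   # initiator1 reachable from the target side
    ping_ip 10.0.0.4                # target1 reachable from the initiator side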
00:39:53.997 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:39:53.998 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:53.998 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:39:53.998 00:39:53.998 --- 10.0.0.4 ping statistics --- 00:39:53.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.998 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # return 0 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:53.998 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:39:54.257 
11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:39:54.257 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:54.258 11:23:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:54.258 ' 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=102090 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 102090 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 102090 ']' 00:39:54.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
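
The address discovery traced above never queries ip addr directly: the suite keeps each fabric IP in the device's kernel ifalias attribute and reads it back through /sys, hopping into the target's network namespace via a bash nameref when one is configured. A minimal sketch of that pattern, assuming the nvmf_ns_spdk namespace and device names from the log (the helper name is illustrative, not the suite's exact code):

#!/usr/bin/env bash
# Namespace command prefix, kept as an array so it expands safely.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

get_ip_address() {
    local dev=$1 in_ns=${2:-} ip
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns   # nameref: resolve the caller's prefix array by name
        ip=$("${ns[@]}" cat "/sys/class/net/$dev/ifalias")
    else
        ip=$(cat "/sys/class/net/$dev/ifalias")
    fi
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator1                   # host side, read directly: 10.0.0.3
get_ip_address target0 NVMF_TARGET_NS_CMD   # read inside the namespace: 10.0.0.2
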
00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.258 11:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:54.258 [2024-12-05 11:23:18.806976] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:54.258 [2024-12-05 11:23:18.808386] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:54.258 [2024-12-05 11:23:18.808459] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:54.517 [2024-12-05 11:23:18.962876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:54.517 [2024-12-05 11:23:19.042605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:54.517 [2024-12-05 11:23:19.042804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:54.517 [2024-12-05 11:23:19.043000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:54.517 [2024-12-05 11:23:19.043050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:54.517 [2024-12-05 11:23:19.043108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:54.517 [2024-12-05 11:23:19.044701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:54.517 [2024-12-05 11:23:19.044795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:54.517 [2024-12-05 11:23:19.044838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:54.517 [2024-12-05 11:23:19.044840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.776 [2024-12-05 11:23:19.178772] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:54.776 [2024-12-05 11:23:19.178993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:54.776 [2024-12-05 11:23:19.179677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:54.776 [2024-12-05 11:23:19.179771] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
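
The target start above is the usual two-step: launch nvmf_tgt in the background (here inside the namespace, with -m 0x1E pinning reactors to cores 1-4 and --interrupt-mode switching them off busy polling) and then block until its RPC socket answers. A rough fence-post equivalent of what waitforlisten does, with an illustrative retry budget; the real helper in autotest_common.sh is more thorough:

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk   # checkout location from the log
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec nvmf_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    # framework_wait_init returns once the app finishes subsystem init
    if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" framework_wait_init &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || exit 1   # target died before it could listen
    sleep 0.1
done
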
00:39:54.776 [2024-12-05 11:23:19.180785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:55.344 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.345 [2024-12-05 11:23:19.918296] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.345 11:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.345 Malloc0 00:39:55.604 [2024-12-05 11:23:20.010618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.604 11:23:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=102162 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 102162 /var/tmp/bdevperf.sock 00:39:55.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 102162 ']' 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:55.604 { 00:39:55.604 "params": { 00:39:55.604 "name": "Nvme$subsystem", 00:39:55.604 "trtype": "$TEST_TRANSPORT", 00:39:55.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.604 "adrfam": "ipv4", 00:39:55.604 "trsvcid": "$NVMF_PORT", 00:39:55.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.604 "hdgst": ${hdgst:-false}, 00:39:55.604 "ddgst": ${ddgst:-false} 00:39:55.604 }, 00:39:55.604 "method": "bdev_nvme_attach_controller" 00:39:55.604 } 00:39:55.604 EOF 00:39:55.604 )") 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
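
bdevperf gets its controller from a generated JSON config rather than command-line options: gen_nvmf_target_json expands one bdev_nvme_attach_controller entry per subsystem from a heredoc template, pipes the result through jq, and the caller hands it over as --json /dev/fd/63 via process substitution (the expanded entry appears in the trace just below). A single-entry sketch with the values the log resolved; the real helper assembles the config array from its arguments, but the overall subsystems/bdev shape is the standard SPDK app config:

#!/usr/bin/env bash
gen_target_json() {
    local n=$1
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme$n",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$n",
            "hostnqn": "nqn.2016-06.io.spdk:host$n",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# Fed to bdevperf on a file descriptor, never written to disk:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) \
#          -q 64 -o 65536 -w verify -t 10
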
00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:39:55.604 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:55.604 "params": { 00:39:55.604 "name": "Nvme0", 00:39:55.604 "trtype": "tcp", 00:39:55.604 "traddr": "10.0.0.2", 00:39:55.604 "adrfam": "ipv4", 00:39:55.604 "trsvcid": "4420", 00:39:55.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:55.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:55.604 "hdgst": false, 00:39:55.604 "ddgst": false 00:39:55.604 }, 00:39:55.604 "method": "bdev_nvme_attach_controller" 00:39:55.604 }' 00:39:55.604 [2024-12-05 11:23:20.122581] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:55.604 [2024-12-05 11:23:20.123196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102162 ] 00:39:55.863 [2024-12-05 11:23:20.280384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.863 [2024-12-05 11:23:20.339088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.863 Running I/O for 10 seconds... 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:39:56.122 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=677 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 677 -ge 100 ']' 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.383 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.383 [2024-12-05 11:23:20.930333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930436] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.383 [2024-12-05 11:23:20.930659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 
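
The countdown traced just before this error burst is host_management.sh's waitforio: sample bdevperf's iostat over its RPC socket until the bdev has completed at least 100 reads (the first sample above saw 67, the second 677), giving up after ten tries. A sketch of the same loop, with paths and the threshold taken from the log:

#!/usr/bin/env bash
rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock)

for ((i = 10; i != 0; i--)); do
    reads=$("${rpc[@]}" bdev_get_iostat -b Nvme0n1 |
        jq -r '.bdevs[0].num_read_ops')
    if ((reads >= 100)); then
        echo "I/O confirmed: $reads reads"
        break
    fi
    sleep 0.25
done
((i != 0)) || exit 1   # bdevperf never produced enough I/O
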
00:39:56.384 [2024-12-05 11:23:20.930678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.930751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7905e0 is same with the state(6) to be set 00:39:56.384 [2024-12-05 11:23:20.931498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.931982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.931992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.384 [2024-12-05 11:23:20.932245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.384 [2024-12-05 11:23:20.932254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:56.385 [2024-12-05 11:23:20.932286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:56.385 [2024-12-05 11:23:20.932495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:39:56.385 [2024-12-05 11:23:20.932715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.385 [2024-12-05 11:23:20.932869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.932899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:39:56.385 [2024-12-05 11:23:20.933929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:56.385 task offset: 98304 on job bdev=Nvme0n1 fails 00:39:56.385 00:39:56.385 Latency(us) 00:39:56.385 [2024-12-05T11:23:21.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.385 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.385 Job: Nvme0n1 ended in about 0.42 seconds with error 
00:39:56.385 Verification LBA range: start 0x0 length 0x400 00:39:56.385 Nvme0n1 : 0.42 1812.03 113.25 151.00 0.00 31405.10 2044.10 39696.09 00:39:56.385 [2024-12-05T11:23:21.037Z] =================================================================================================================== 00:39:56.385 [2024-12-05T11:23:21.037Z] Total : 1812.03 113.25 151.00 0.00 31405.10 2044.10 39696.09 00:39:56.385 [2024-12-05 11:23:20.935697] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:56.385 [2024-12-05 11:23:20.935719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138a660 (9): Bad file descriptor 00:39:56.385 [2024-12-05 11:23:20.936655] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:39:56.385 [2024-12-05 11:23:20.936745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:39:56.385 [2024-12-05 11:23:20.936767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.385 [2024-12-05 11:23:20.936785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:39:56.385 [2024-12-05 11:23:20.936796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:39:56.385 [2024-12-05 11:23:20.936806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:39:56.385 [2024-12-05 11:23:20.936815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x138a660 00:39:56.385 [2024-12-05 11:23:20.936845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138a660 (9): Bad file descriptor 00:39:56.386 [2024-12-05 11:23:20.936861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:39:56.386 [2024-12-05 11:23:20.936872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:39:56.386 [2024-12-05 11:23:20.936883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:39:56.386 [2024-12-05 11:23:20.936894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
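
The failed run above is the point of the test, not an accident: once I/O is confirmed in flight, host_management.sh revokes the host's entry in the subsystem allow list (the rpc_cmd nvmf_subsystem_remove_host call earlier), which aborts the queued writes with SQ DELETION and makes every reconnect attempt fail with 'does not allow host', exactly as the trace shows. The RPC pair doing the damage and the repair, sketched with the NQNs from the log against the target's default RPC socket:

#!/usr/bin/env bash
rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)

# Revoke access: in-flight I/O is aborted and reconnects are rejected.
"${rpc[@]}" nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

sleep 1   # give the initiator time to hit the rejection, as the test does

# Restore access so a follow-up connection can succeed.
"${rpc[@]}" nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
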
00:39:56.386 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.386 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:56.386 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.386 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.386 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.386 11:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 102162 00:39:57.323 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (102162) - No such process 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:57.323 { 00:39:57.323 "params": { 00:39:57.323 "name": "Nvme$subsystem", 00:39:57.323 "trtype": "$TEST_TRANSPORT", 00:39:57.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:57.323 "adrfam": "ipv4", 00:39:57.323 "trsvcid": "$NVMF_PORT", 00:39:57.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:57.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:57.323 "hdgst": ${hdgst:-false}, 00:39:57.323 "ddgst": ${ddgst:-false} 00:39:57.323 }, 00:39:57.323 "method": "bdev_nvme_attach_controller" 00:39:57.323 } 00:39:57.323 EOF 00:39:57.323 )") 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
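
The kill -9 failing with 'No such process' above is expected: bdevperf already exited on the fabric error, and host_management.sh line 91 guards the kill with a trailing true so teardown continues either way, clearing the per-core lock files before the recovery run starts. The same tolerant-teardown idiom in isolation, with the pid and lock paths from the log (stub values, for illustration only):

#!/usr/bin/env bash
perfpid=102162   # pid captured when bdevperf was launched

# The process may already be gone; never let that abort cleanup.
kill -9 "$perfpid" 2> /dev/null || true

# Drop stale per-core lock files left behind by the failed app instance.
rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
      /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
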
00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:39:57.323 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:57.323 "params": { 00:39:57.323 "name": "Nvme0", 00:39:57.323 "trtype": "tcp", 00:39:57.323 "traddr": "10.0.0.2", 00:39:57.323 "adrfam": "ipv4", 00:39:57.323 "trsvcid": "4420", 00:39:57.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:57.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:57.323 "hdgst": false, 00:39:57.323 "ddgst": false 00:39:57.323 }, 00:39:57.323 "method": "bdev_nvme_attach_controller" 00:39:57.323 }' 00:39:57.582 [2024-12-05 11:23:22.015118] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:57.582 [2024-12-05 11:23:22.015232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102208 ] 00:39:57.582 [2024-12-05 11:23:22.169296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:57.582 [2024-12-05 11:23:22.219104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.841 Running I/O for 1 seconds... 00:39:58.778 1920.00 IOPS, 120.00 MiB/s 00:39:58.778 Latency(us) 00:39:58.778 [2024-12-05T11:23:23.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.778 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:58.778 Verification LBA range: start 0x0 length 0x400 00:39:58.778 Nvme0n1 : 1.02 1940.01 121.25 0.00 0.00 32475.80 5149.26 29335.16 00:39:58.778 [2024-12-05T11:23:23.430Z] =================================================================================================================== 00:39:58.778 [2024-12-05T11:23:23.430Z] Total : 1940.01 121.25 0.00 0.00 32475.80 5149.26 29335.16 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:59.037 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r 
nvme-tcp 00:39:59.037 rmmod nvme_tcp 00:39:59.037 rmmod nvme_fabrics 00:39:59.037 rmmod nvme_keyring 00:39:59.295 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:59.295 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:39:59.295 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:39:59.295 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 102090 ']' 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 102090 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 102090 ']' 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 102090 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102090 00:39:59.296 killing process with pid 102090 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102090' 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 102090 00:39:59.296 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 102090 00:39:59.553 [2024-12-05 11:23:24.026398] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e 
/sys/class/net/nvmf_br/address ]] 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:39:59.553 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address 
]] 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:59.811 00:39:59.811 real 0m6.239s 00:39:59.811 user 0m17.280s 00:39:59.811 sys 0m2.637s 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:59.811 ************************************ 00:39:59.811 END TEST nvmf_host_management 00:39:59.811 ************************************ 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:59.811 ************************************ 00:39:59.811 START TEST nvmf_lvol 00:39:59.811 ************************************ 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:39:59.811 * Looking for test storage... 
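
The iptr step in the teardown just before the END TEST banner is the counterpart of the ipts calls seen during setup: every firewall rule the harness installs carries an 'SPDK_NVMF:' comment, so cleanup is a single filter pass over the saved ruleset instead of a bookkept list of deletions. The idiom, condensed from the replayed commands:

# setup: tag each rule with a recognizable comment
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
# teardown: drop every tagged rule in one shot
iptables-save | grep -v SPDK_NVMF | iptables-restore
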
00:39:59.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:39:59.811 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:00.070 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:00.070 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:00.070 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.071 --rc genhtml_branch_coverage=1 00:40:00.071 --rc genhtml_function_coverage=1 00:40:00.071 --rc genhtml_legend=1 00:40:00.071 --rc geninfo_all_blocks=1 00:40:00.071 --rc geninfo_unexecuted_blocks=1 00:40:00.071 00:40:00.071 ' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.071 --rc genhtml_branch_coverage=1 00:40:00.071 --rc genhtml_function_coverage=1 00:40:00.071 --rc genhtml_legend=1 00:40:00.071 --rc geninfo_all_blocks=1 00:40:00.071 --rc geninfo_unexecuted_blocks=1 00:40:00.071 00:40:00.071 ' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.071 --rc genhtml_branch_coverage=1 00:40:00.071 --rc genhtml_function_coverage=1 00:40:00.071 --rc genhtml_legend=1 00:40:00.071 --rc geninfo_all_blocks=1 00:40:00.071 --rc geninfo_unexecuted_blocks=1 00:40:00.071 00:40:00.071 ' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:00.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.071 --rc genhtml_branch_coverage=1 00:40:00.071 --rc genhtml_function_coverage=1 00:40:00.071 --rc genhtml_legend=1 00:40:00.071 --rc geninfo_all_blocks=1 00:40:00.071 --rc geninfo_unexecuted_blocks=1 00:40:00.071 00:40:00.071 ' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
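
The scripts/common.sh walk above is the harness probing its lcov: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, so lcov 1.15 is correctly judged older than 2 and the legacy --rc lcov_* coverage flags get used. A condensed sketch of that comparison (lt and the field loop are simplified here, not copied verbatim from scripts/common.sh):

lt() { # "is version $1 older than version $2?"
    local IFS=.-: v1 v2 i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1 # equal versions are not "less than"
}
lt 1.15 2 && echo "old lcov: fall back to --rc lcov_* options"
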
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:00.071 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@280 -- # nvmf_veth_init 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@223 -- # create_target_ns 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:00.072 
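
nvmftestinit with NET_TYPE=virt builds the whole fabric out of virtual devices: the target's network stack lives in its own namespace, and a bridge on the host side ties the veth pairs together. The scaffolding replayed here and immediately below, in plain form:

ip netns add nvmf_ns_spdk                       # private network stack for the target
ip netns exec nvmf_ns_spdk ip link set lo up    # loopback up inside the namespace
ip link add nvmf_br type bridge                 # host-side switch for all veth peers
ip link set nvmf_br up

Every later target-side command is then just prefixed with ip netns exec nvmf_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD array holds.
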
11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # create_main_bridge 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@105 -- # delete_main_bridge 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # 
setup_interface_pair 0 veth 167772161 tcp 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator0 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target0 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0 up 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target0_br 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target0 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:40:00.072 10.0.0.1 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:40:00.072 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.073 11:23:24 
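
set_ip receives addresses as 32-bit integers from an ip_pool counter (167772161 is 0x0A000001, i.e. 10.0.0.1) and val_to_ip turns them into dotted quads; the address is both assigned to the device and mirrored into its sysfs ifalias so later helpers can read it back. The trace only shows val_to_ip's printf with the octets already expanded, so the octet math below is a plausible reconstruction, not the verbatim helper:

# hypothetical reconstruction: split a 32-bit value into four octets
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 255)) $((val >> 16 & 255)) \
        $((val >> 8 & 255)) $((val & 255))
}

ip=$(val_to_ip 167772161)                        # -> 10.0.0.1
ip addr add "$ip/24" dev initiator0
echo "$ip" | tee /sys/class/net/initiator0/ifalias
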
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:40:00.073 10.0.0.2 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator0 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:40:00.073 11:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target0_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:40:00.073 11:23:24 
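
At this point pair 0 is fully wired, and pair 1 (initiator1/target1 on 10.0.0.3/10.0.0.4) is about to repeat the same dance. Condensed from the commands replayed above, one initiator/target pair amounts to:

ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk           # target end moves into the namespace
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0_br master nvmf_br         # patch both *_br peers into the bridge
ip link set target0_br master nvmf_br
for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
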
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator1 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:40:00.073 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target1 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1 up 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target1_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:40:00.332 11:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target1 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772163 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:40:00.332 10.0.0.3 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772164 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec 
nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:40:00.332 10.0.0.4 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator1 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # 
set_up target1_br 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:00.332 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 2 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:40:00.333 11:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:00.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:00.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:40:00.333 00:40:00.333 --- 10.0.0.1 ping statistics --- 00:40:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.333 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:00.333 11:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:00.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:00.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:40:00.333 00:40:00.333 --- 10.0.0.2 ping statistics --- 00:40:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.333 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:40:00.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:00.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:40:00.333 00:40:00.333 --- 10.0.0.3 ping statistics --- 00:40:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.333 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:40:00.333 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:40:00.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:40:00.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.109 ms 00:40:00.333 00:40:00.333 --- 10.0.0.4 ping statistics --- 00:40:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.334 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # return 0 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:40:00.334 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 
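Note how this verification phase never hardcodes an address: get_ip_address resolves a logical device name by reading the ifalias that setup wrote earlier, wrapping the read in `ip netns exec nvmf_ns_spdk` when the device lives inside the target namespace, and each resolved address is pinged once from the opposite side. A hedged sketch of that resolve-and-verify round trip (resolve_ip is an illustrative helper, not the setup.sh function):

    # Resolve a device's IP from its interface alias, optionally inside a netns.
    resolve_ip() {   # $1 = device, $2 = optional namespace
        local dev=$1 ns=${2:+ip netns exec $2}
        $ns cat "/sys/class/net/$dev/ifalias"
    }
    ip netns exec nvmf_ns_spdk ping -c 1 "$(resolve_ip initiator0)"   # ns -> host
    ping -c 1 "$(resolve_ip target0 nvmf_ns_spdk)"                    # host -> ns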
-- # local dev=initiator1 in_ns= ip 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:40:00.647 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:40:00.647 ' 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
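At this point the legacy environment consumed by nvmf_lvol.sh is fully derived from the aliases read above; summarizing the mapping recorded in this run:

    NVMF_FIRST_INITIATOR_IP  = 10.0.0.1   (initiator0, host side)
    NVMF_SECOND_INITIATOR_IP = 10.0.0.3   (initiator1, host side)
    NVMF_FIRST_TARGET_IP     = 10.0.0.2   (target0, inside nvmf_ns_spdk)
    NVMF_SECOND_TARGET_IP    = 10.0.0.4   (target1, inside nvmf_ns_spdk)

with NVMF_TRANSPORT_OPTS set to '-t tcp -o' for the transport-creation RPC that follows.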
00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=102468 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 102468 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 102468 ']' 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.647 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:00.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.648 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.648 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:00.648 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:00.648 [2024-12-05 11:23:25.147319] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:00.648 [2024-12-05 11:23:25.148721] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:00.648 [2024-12-05 11:23:25.148953] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:00.906 [2024-12-05 11:23:25.308020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:00.906 [2024-12-05 11:23:25.372065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:00.906 [2024-12-05 11:23:25.372307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:00.906 [2024-12-05 11:23:25.372605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:00.906 [2024-12-05 11:23:25.372830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:00.906 [2024-12-05 11:23:25.372907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
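nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers. A simplified sketch of that start-and-poll pattern, with paths and flags taken from this log (the polling loop is illustrative; the real waitforlisten helper in autotest_common.sh also bounds the retries, max_retries=100 in this trace):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready to serve.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died
        sleep 0.5
    done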
00:40:00.906 [2024-12-05 11:23:25.374163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:00.906 [2024-12-05 11:23:25.374329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:00.906 [2024-12-05 11:23:25.374333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.906 [2024-12-05 11:23:25.457922] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:00.906 [2024-12-05 11:23:25.459022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:00.906 [2024-12-05 11:23:25.459610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:00.906 [2024-12-05 11:23:25.459549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:00.906 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:00.906 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:00.906 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:00.906 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:00.906 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:00.906 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:00.906 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:01.472 [2024-12-05 11:23:25.844229] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:01.472 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:01.728 11:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:01.728 11:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:01.986 11:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:01.986 11:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:02.244 11:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:02.502 11:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=626b07c2-19ed-4c05-a385-be61f0b490b7 00:40:02.502 11:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 626b07c2-19ed-4c05-a385-be61f0b490b7 lvol 20 00:40:02.761 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
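With the target listening, the lvol stack is assembled over RPC: the TCP transport, two malloc bdevs (64 MB, 512-byte blocks), a RAID-0 across them, an lvstore on the raid, and a 20 MB lvol on the store. The same sequence as a condensed sketch (commands verbatim from the trace; capturing the returned UUIDs into shell variables mirrors what the script does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as recorded
    $rpc bdev_malloc_create 64 512                      # -> Malloc0
    $rpc bdev_malloc_create 64 512                      # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MB lvol UUID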
lvol=b42bebeb-01c3-4cd0-a553-b268e7a17258 00:40:02.761 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:03.020 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b42bebeb-01c3-4cd0-a553-b268e7a17258 00:40:03.020 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:03.279 [2024-12-05 11:23:27.848155] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:03.279 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:03.538 11:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:03.538 11:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=102603 00:40:03.538 11:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:04.474 11:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b42bebeb-01c3-4cd0-a553-b268e7a17258 MY_SNAPSHOT 00:40:05.041 11:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d440d50d-00e1-45a6-a4d2-a27f65338c06 00:40:05.041 11:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b42bebeb-01c3-4cd0-a553-b268e7a17258 30 00:40:05.041 11:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d440d50d-00e1-45a6-a4d2-a27f65338c06 MY_CLONE 00:40:05.610 11:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ad193b36-c3b6-43aa-a8b8-661d7d67da1f 00:40:05.610 11:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ad193b36-c3b6-43aa-a8b8-661d7d67da1f 00:40:06.178 11:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 102603 00:40:14.302 Initializing NVMe Controllers 00:40:14.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:14.302 Controller IO queue size 128, less than required. 00:40:14.302 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:14.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:14.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:14.302 Initialization complete. Launching workers. 
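The exported lvol is then hammered with random 4 KiB writes for ten seconds while the snapshot/resize/clone/inflate path is exercised underneath the live I/O, which is the actual point of the test. Condensed from the trace (the $rpc, $lvs and $lvol variables follow from the sketch above):

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &   # background I/O
    perf_pid=$!
    sleep 1
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # snapshot under load
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the origin
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # clone the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # decouple clone from snap
    wait $perf_pid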
00:40:14.302 ======================================================== 00:40:14.302 Latency(us) 00:40:14.302 Device Information : IOPS MiB/s Average min max 00:40:14.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8914.70 34.82 14360.18 5556.53 92828.49 00:40:14.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9785.40 38.22 13081.77 5703.55 87077.05 00:40:14.302 ======================================================== 00:40:14.302 Total : 18700.09 73.05 13691.21 5556.53 92828.49 00:40:14.302 00:40:14.302 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:14.302 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b42bebeb-01c3-4cd0-a553-b268e7a17258 00:40:14.302 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 626b07c2-19ed-4c05-a385-be61f0b490b7 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:14.560 rmmod nvme_tcp 00:40:14.560 rmmod nvme_fabrics 00:40:14.560 rmmod nvme_keyring 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 102468 ']' 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 102468 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 102468 ']' 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 102468 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:14.560 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102468 00:40:14.819 killing 
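The perf summary above shows the two I/O queues (lcores 3 and 4) sustaining roughly 18,700 IOPS / 73 MiB/s aggregate at an average latency near 13.7 ms, with the qd=128 workload exceeding the controller queue size as the warning notes. Teardown then unwinds the stack in reverse order of creation, subsystem before lvol before lvstore, before the kernel modules are unloaded:

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # stop the export first
    $rpc bdev_lvol_delete "$lvol"                           # then the volume
    $rpc bdev_lvol_delete_lvstore -u "$lvs"                 # then its store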
process with pid 102468 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102468' 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 102468 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 102468 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:14.819 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 
00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:40:15.078 ************************************ 00:40:15.078 END TEST nvmf_lvol 00:40:15.078 ************************************ 00:40:15.078 00:40:15.078 real 0m15.319s 00:40:15.078 user 0m53.746s 00:40:15.078 sys 0m7.063s 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:15.078 ************************************ 00:40:15.078 START TEST nvmf_lvs_grow 00:40:15.078 
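nvmf_fini tears the topology down device by device: the bridge is deleted, the host-side veths are removed (their namespace-side peers vanished with the netns), dev_map is reset, and finally iptr restores the firewall by replaying the current ruleset minus every rule carrying the SPDK_NVMF comment tag, which is why setup tagged each insertion. The cleanup one-liner, as run in the trace:

    # Drop only the rules this harness added, leaving everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore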
************************************ 00:40:15.078 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:15.339 * Looking for test storage... 00:40:15.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.339 --rc genhtml_branch_coverage=1 00:40:15.339 --rc genhtml_function_coverage=1 00:40:15.339 --rc genhtml_legend=1 00:40:15.339 --rc geninfo_all_blocks=1 00:40:15.339 --rc geninfo_unexecuted_blocks=1 00:40:15.339 00:40:15.339 ' 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.339 --rc genhtml_branch_coverage=1 00:40:15.339 --rc genhtml_function_coverage=1 00:40:15.339 --rc genhtml_legend=1 00:40:15.339 --rc geninfo_all_blocks=1 00:40:15.339 --rc geninfo_unexecuted_blocks=1 00:40:15.339 00:40:15.339 ' 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.339 --rc genhtml_branch_coverage=1 00:40:15.339 --rc genhtml_function_coverage=1 00:40:15.339 --rc genhtml_legend=1 00:40:15.339 --rc geninfo_all_blocks=1 00:40:15.339 --rc geninfo_unexecuted_blocks=1 00:40:15.339 00:40:15.339 ' 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:15.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.339 --rc genhtml_branch_coverage=1 00:40:15.339 --rc genhtml_function_coverage=1 00:40:15.339 --rc genhtml_legend=1 00:40:15.339 --rc geninfo_all_blocks=1 00:40:15.339 --rc geninfo_unexecuted_blocks=1 00:40:15.339 00:40:15.339 ' 00:40:15.339 11:23:39 
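The block above is the coverage-tooling gate at the top of every test script: it reads the installed lcov version and runs it through cmp_versions (here 'lt 1.15 2') to decide which --rc option spelling to export. A simplified field-wise compare in the same spirit (the real cmp_versions in scripts/common.sh also splits on '-' and ':'; version_lt is an illustrative name):

    # Return success when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=. i; local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc names"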
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.339 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.340 11:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@280 -- # nvmf_veth_init 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@223 -- # create_target_ns 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # create_main_bridge 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@105 -- # delete_main_bridge 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:15.340 11:23:39 
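The create_target_ns and create_main_bridge traces above reduce to a handful of iproute2 and iptables commands. A minimal standalone sketch of the same bootstrap, reusing the nvmf_ns_spdk and nvmf_br names from the log (assumes root privileges):

  # namespace that will host the nvmf target, with loopback brought up
  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  # bridge that ties all initiator/target veth endpoints together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  # allow traffic to cross between the bridge ports
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT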
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator0 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target0 00:40:15.340 11:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0 up 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target0_br 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:40:15.340 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.341 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:40:15.341 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:40:15.341 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:15.341 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target0 00:40:15.341 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:40:15.341 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:15.628 10.0.0.1 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:40:15.628 10.0.0.2 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator0 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:40:15.628 11:23:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target0_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:40:15.628 
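Each setup_interface_pair iteration traced above follows the same recipe: create two veth pairs, move the target end into the namespace, derive dotted-quad addresses from the 32-bit pool value (167772161 is 0x0A000001, i.e. 10.0.0.1), enslave the *_br peers to the bridge, and open TCP port 4420 on the initiator. A condensed sketch of pair 0 under those assumptions:

  # integer-to-IP helper, equivalent to the printf calls in the trace
  val_to_ip() {
      local v=$1
      printf '%u.%u.%u.%u\n' $((v >> 24)) $(((v >> 16) & 0xff)) \
          $(((v >> 8) & 0xff)) $((v & 0xff))
  }
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk
  ip addr add "$(val_to_ip 167772161)/24" dev initiator0                          # 10.0.0.1
  ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772162)/24" dev target0  # 10.0.0.2
  # set_ip also records each address in the device's ifalias file for later lookup
  echo 10.0.0.1 > /sys/class/net/initiator0/ifalias
  ip link set initiator0 up
  ip link set initiator0_br up
  ip link set target0_br up
  ip netns exec nvmf_ns_spdk ip link set target0 up
  # hang both bridge-side peers off nvmf_br and open the NVMe/TCP port
  ip link set initiator0_br master nvmf_br
  ip link set target0_br master nvmf_br
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT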
11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator1 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:40:15.628 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target1 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1 up 00:40:15.629 11:23:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target1 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772163 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:40:15.629 10.0.0.3 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local 
val=167772164 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:40:15.629 10.0.0.4 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator1 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.629 11:23:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target1_br 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:15.629 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 2 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 
-- # get_net_dev initiator0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:15.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:15.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:40:15.890 00:40:15.890 --- 10.0.0.1 ping statistics --- 00:40:15.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.890 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:15.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:15.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:40:15.890 00:40:15.890 --- 10.0.0.2 ping statistics --- 00:40:15.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.890 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:40:15.890 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:40:15.891 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:40:15.891 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:40:15.891 00:40:15.891 --- 10.0.0.3 ping statistics --- 00:40:15.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.891 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:40:15.891 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:40:15.891 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:40:15.891 00:40:15.891 --- 10.0.0.4 ping statistics --- 00:40:15.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.891 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # return 0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address 
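The ping_ips loop traced above validates both directions of every pair: each initiator address is pinged from inside the target namespace, and each target address from the host side. A sketch of the same connectivity check for the two pairs, using the addresses assigned above:

  for pair in "10.0.0.1 10.0.0.2" "10.0.0.3 10.0.0.4"; do
      set -- $pair   # split into initiator and target address
      ip netns exec nvmf_ns_spdk ping -c 1 "$1"   # initiator side, reached from the ns
      ping -c 1 "$2"                              # target side, reached from the host
  done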
initiator1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:40:15.891 11:23:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:40:15.891 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:40:15.892 ' 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:15.892 
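nvmf_legacy_env then maps the dev_map entries back onto the variable names older tests expect; every address is simply read back from the ifalias file that set_ip populated earlier. A sketch of that resolution, with variable names as in the trace:

  NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)                        # 10.0.0.1
  NVMF_SECOND_INITIATOR_IP=$(cat /sys/class/net/initiator1/ifalias)                       # 10.0.0.3
  NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)   # 10.0.0.2
  NVMF_SECOND_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias)  # 10.0.0.4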
11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=103011 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 103011 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 103011 ']' 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:15.892 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:15.892 [2024-12-05 11:23:40.513264] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:15.892 [2024-12-05 11:23:40.514865] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:15.892 [2024-12-05 11:23:40.514934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:16.150 [2024-12-05 11:23:40.673639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.150 [2024-12-05 11:23:40.729986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:16.150 [2024-12-05 11:23:40.730047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
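nvmfappstart launches nvmf_tgt inside the namespace with the --interrupt-mode flag that common.sh appended earlier, and waitforlisten blocks until the new process (PID 103011 in this run) answers on the RPC socket. A hedged sketch of that start-and-wait sequence; the polling loop is a simplification of what waitforlisten actually does:

  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # poll until the app serves RPCs on the default UNIX socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null; do
      sleep 0.5
  done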
00:40:16.150 [2024-12-05 11:23:40.730063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:16.150 [2024-12-05 11:23:40.730075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:16.150 [2024-12-05 11:23:40.730087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:16.150 [2024-12-05 11:23:40.730436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.408 [2024-12-05 11:23:40.812226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:16.409 [2024-12-05 11:23:40.812584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:16.409 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:16.409 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:16.409 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:16.409 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:16.409 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:16.409 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.409 11:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:16.667 [2024-12-05 11:23:41.095320] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:16.667 ************************************ 00:40:16.667 START TEST lvs_grow_clean 00:40:16.667 ************************************ 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 
00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:16.667 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:16.926 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:16.926 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:17.186 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:17.186 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:17.186 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:17.444 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:17.444 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:17.444 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd lvol 150 00:40:17.703 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=78a10e93-335c-4817-af35-b871bc968921 00:40:17.703 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:17.703 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:17.962 [2024-12-05 11:23:42.559121] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:17.962 [2024-12-05 11:23:42.559278] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:17.962 true 00:40:17.962 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:17.962 11:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:18.221 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:18.221 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:18.480 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 78a10e93-335c-4817-af35-b871bc968921 00:40:18.739 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:18.998 [2024-12-05 11:23:43.527266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:18.998 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103158 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103158 /var/tmp/bdevperf.sock 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 103158 ']' 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:19.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:19.257 11:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:19.257 [2024-12-05 11:23:43.893140] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:40:19.257 [2024-12-05 11:23:43.893220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103158 ] 00:40:19.516 [2024-12-05 11:23:44.031470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.516 [2024-12-05 11:23:44.084127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.452 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:20.452 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:20.452 11:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:20.452 Nvme0n1 00:40:20.711 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:20.969 [ 00:40:20.969 { 00:40:20.969 "aliases": [ 00:40:20.969 "78a10e93-335c-4817-af35-b871bc968921" 00:40:20.969 ], 00:40:20.969 "assigned_rate_limits": { 00:40:20.969 "r_mbytes_per_sec": 0, 00:40:20.969 "rw_ios_per_sec": 0, 00:40:20.969 "rw_mbytes_per_sec": 0, 00:40:20.969 "w_mbytes_per_sec": 0 00:40:20.969 }, 00:40:20.969 "block_size": 4096, 00:40:20.969 "claimed": false, 00:40:20.969 "driver_specific": { 00:40:20.969 "mp_policy": "active_passive", 00:40:20.969 "nvme": [ 00:40:20.969 { 00:40:20.969 "ctrlr_data": { 00:40:20.969 "ana_reporting": false, 00:40:20.969 "cntlid": 1, 00:40:20.969 "firmware_revision": "25.01", 00:40:20.969 "model_number": "SPDK bdev Controller", 00:40:20.969 "multi_ctrlr": true, 00:40:20.969 "oacs": { 00:40:20.969 "firmware": 0, 00:40:20.969 "format": 0, 00:40:20.969 "ns_manage": 0, 00:40:20.969 "security": 0 00:40:20.969 }, 00:40:20.969 "serial_number": "SPDK0", 00:40:20.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:20.969 "vendor_id": "0x8086" 00:40:20.969 }, 00:40:20.969 "ns_data": { 00:40:20.969 "can_share": true, 00:40:20.969 "id": 1 00:40:20.969 }, 00:40:20.970 "trid": { 00:40:20.970 "adrfam": "IPv4", 00:40:20.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:20.970 "traddr": "10.0.0.2", 00:40:20.970 "trsvcid": "4420", 00:40:20.970 "trtype": "TCP" 00:40:20.970 }, 00:40:20.970 "vs": { 00:40:20.970 "nvme_version": "1.3" 00:40:20.970 } 00:40:20.970 } 00:40:20.970 ] 00:40:20.970 }, 00:40:20.970 "memory_domains": [ 00:40:20.970 { 00:40:20.970 "dma_device_id": "system", 00:40:20.970 "dma_device_type": 1 00:40:20.970 } 00:40:20.970 ], 00:40:20.970 "name": "Nvme0n1", 00:40:20.970 "num_blocks": 38912, 00:40:20.970 "numa_id": -1, 00:40:20.970 "product_name": "NVMe disk", 00:40:20.970 "supported_io_types": { 00:40:20.970 "abort": true, 00:40:20.970 "compare": true, 00:40:20.970 "compare_and_write": true, 00:40:20.970 "copy": true, 00:40:20.970 "flush": true, 00:40:20.970 "get_zone_info": false, 00:40:20.970 "nvme_admin": true, 00:40:20.970 "nvme_io": true, 00:40:20.970 "nvme_io_md": false, 00:40:20.970 "nvme_iov_md": false, 00:40:20.970 "read": true, 00:40:20.970 "reset": true, 00:40:20.970 "seek_data": false, 00:40:20.970 
"seek_hole": false, 00:40:20.970 "unmap": true, 00:40:20.970 "write": true, 00:40:20.970 "write_zeroes": true, 00:40:20.970 "zcopy": false, 00:40:20.970 "zone_append": false, 00:40:20.970 "zone_management": false 00:40:20.970 }, 00:40:20.970 "uuid": "78a10e93-335c-4817-af35-b871bc968921", 00:40:20.970 "zoned": false 00:40:20.970 } 00:40:20.970 ] 00:40:20.970 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103207 00:40:20.970 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:20.970 11:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:20.970 Running I/O for 10 seconds... 00:40:22.342 Latency(us) 00:40:22.342 [2024-12-05T11:23:46.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:22.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:22.342 Nvme0n1 : 1.00 8521.00 33.29 0.00 0.00 0.00 0.00 0.00 00:40:22.342 [2024-12-05T11:23:46.994Z] =================================================================================================================== 00:40:22.342 [2024-12-05T11:23:46.994Z] Total : 8521.00 33.29 0.00 0.00 0.00 0.00 0.00 00:40:22.342 00:40:22.909 11:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:23.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:23.168 Nvme0n1 : 2.00 9078.00 35.46 0.00 0.00 0.00 0.00 0.00 00:40:23.168 [2024-12-05T11:23:47.820Z] =================================================================================================================== 00:40:23.168 [2024-12-05T11:23:47.820Z] Total : 9078.00 35.46 0.00 0.00 0.00 0.00 0.00 00:40:23.168 00:40:23.168 true 00:40:23.168 11:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:23.168 11:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:23.427 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:23.427 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:23.427 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 103207 00:40:23.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:23.995 Nvme0n1 : 3.00 9105.00 35.57 0.00 0.00 0.00 0.00 0.00 00:40:23.995 [2024-12-05T11:23:48.647Z] =================================================================================================================== 00:40:23.995 [2024-12-05T11:23:48.647Z] Total : 9105.00 35.57 0.00 0.00 0.00 0.00 0.00 00:40:23.995 00:40:24.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:24.930 Nvme0n1 : 4.00 9076.00 35.45 0.00 0.00 0.00 0.00 0.00 00:40:24.930 
[2024-12-05T11:23:49.582Z] =================================================================================================================== 00:40:24.930 [2024-12-05T11:23:49.582Z] Total : 9076.00 35.45 0.00 0.00 0.00 0.00 0.00 00:40:24.930 00:40:26.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:26.304 Nvme0n1 : 5.00 9072.40 35.44 0.00 0.00 0.00 0.00 0.00 00:40:26.304 [2024-12-05T11:23:50.956Z] =================================================================================================================== 00:40:26.304 [2024-12-05T11:23:50.956Z] Total : 9072.40 35.44 0.00 0.00 0.00 0.00 0.00 00:40:26.304 00:40:27.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:27.239 Nvme0n1 : 6.00 9042.83 35.32 0.00 0.00 0.00 0.00 0.00 00:40:27.239 [2024-12-05T11:23:51.891Z] =================================================================================================================== 00:40:27.239 [2024-12-05T11:23:51.891Z] Total : 9042.83 35.32 0.00 0.00 0.00 0.00 0.00 00:40:27.239 00:40:28.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:28.175 Nvme0n1 : 7.00 8998.00 35.15 0.00 0.00 0.00 0.00 0.00 00:40:28.175 [2024-12-05T11:23:52.827Z] =================================================================================================================== 00:40:28.175 [2024-12-05T11:23:52.827Z] Total : 8998.00 35.15 0.00 0.00 0.00 0.00 0.00 00:40:28.175 00:40:29.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:29.109 Nvme0n1 : 8.00 9079.12 35.47 0.00 0.00 0.00 0.00 0.00 00:40:29.109 [2024-12-05T11:23:53.761Z] =================================================================================================================== 00:40:29.109 [2024-12-05T11:23:53.761Z] Total : 9079.12 35.47 0.00 0.00 0.00 0.00 0.00 00:40:29.109 00:40:30.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:30.044 Nvme0n1 : 9.00 9141.11 35.71 0.00 0.00 0.00 0.00 0.00 00:40:30.044 [2024-12-05T11:23:54.696Z] =================================================================================================================== 00:40:30.044 [2024-12-05T11:23:54.696Z] Total : 9141.11 35.71 0.00 0.00 0.00 0.00 0.00 00:40:30.044 00:40:30.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:30.978 Nvme0n1 : 10.00 9181.80 35.87 0.00 0.00 0.00 0.00 0.00 00:40:30.978 [2024-12-05T11:23:55.630Z] =================================================================================================================== 00:40:30.978 [2024-12-05T11:23:55.630Z] Total : 9181.80 35.87 0.00 0.00 0.00 0.00 0.00 00:40:30.978 00:40:30.978 00:40:30.978 Latency(us) 00:40:30.978 [2024-12-05T11:23:55.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:30.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:30.978 Nvme0n1 : 10.01 9183.89 35.87 0.00 0.00 13932.74 6085.49 44189.99 00:40:30.978 [2024-12-05T11:23:55.630Z] =================================================================================================================== 00:40:30.978 [2024-12-05T11:23:55.630Z] Total : 9183.89 35.87 0.00 0.00 13932.74 6085.49 44189.99 00:40:30.978 { 00:40:30.978 "results": [ 00:40:30.978 { 00:40:30.978 "job": "Nvme0n1", 00:40:30.978 "core_mask": "0x2", 00:40:30.978 "workload": "randwrite", 00:40:30.978 "status": "finished", 00:40:30.978 "queue_depth": 128, 00:40:30.978 "io_size": 4096, 
00:40:30.978 "runtime": 10.011658, 00:40:30.978 "iops": 9183.893417054398, 00:40:30.978 "mibps": 35.87458366036874, 00:40:30.978 "io_failed": 0, 00:40:30.978 "io_timeout": 0, 00:40:30.978 "avg_latency_us": 13932.735982300172, 00:40:30.978 "min_latency_us": 6085.4857142857145, 00:40:30.978 "max_latency_us": 44189.98857142857 00:40:30.978 } 00:40:30.978 ], 00:40:30.978 "core_count": 1 00:40:30.978 } 00:40:30.978 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103158 00:40:30.978 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 103158 ']' 00:40:30.978 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 103158 00:40:30.978 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:30.978 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:30.978 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103158 00:40:31.235 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:31.235 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:31.235 killing process with pid 103158 00:40:31.235 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103158' 00:40:31.235 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 103158 00:40:31.235 Received shutdown signal, test time was about 10.000000 seconds 00:40:31.235 00:40:31.235 Latency(us) 00:40:31.235 [2024-12-05T11:23:55.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.235 [2024-12-05T11:23:55.887Z] =================================================================================================================== 00:40:31.235 [2024-12-05T11:23:55.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:31.235 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 103158 00:40:31.235 11:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:31.801 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:32.061 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:32.061 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:32.320 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
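The free_clusters value read back here is pure cluster arithmetic from the sizes set at the top of the test; a minimal sketch of the accounting, using the 4 MiB cluster size from lvstore creation:

# Cluster accounting behind the free_clusters == 61 check (4 MiB clusters):
#   200M backing file -> 49 data clusters once lvstore metadata is reserved
#   400M after grow   -> 99 data clusters
#   150M lvol         -> ceil(150 / 4) = 38 allocated clusters
#   free              -> 99 - 38 = 61
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores \
    -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd | jq -r '.[0].free_clusters'   # 61 in this run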
00:40:32.321 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:32.321 11:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:32.580 [2024-12-05 11:23:56.991228] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:32.580 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:32.840 2024/12/05 11:23:57 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:40:32.840 request: 00:40:32.840 { 00:40:32.840 "method": "bdev_lvol_get_lvstores", 00:40:32.840 "params": { 00:40:32.840 "uuid": "085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd" 00:40:32.840 } 00:40:32.840 } 00:40:32.840 Got JSON-RPC error response 00:40:32.840 GoRPCClient: error on JSON-RPC call 00:40:32.840 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:32.840 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
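The NOT wrapper from autotest_common.sh inverts the exit status of the command it runs, so this step passes only because the lookup fails once aio_bdev has been deleted out from under the lvstore. A minimal sketch of the same check, with the error handling reduced to an if (the echo and exit are illustrative, not the helper's actual code):

# After bdev_aio_delete the lvstore must be gone; expect Code=-19 (No such device)
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores \
        -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd; then
    echo "lvstore still present after bdev_aio_delete" >&2
    exit 1
fi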
00:40:32.840 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:32.840 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:32.840 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:33.099 aio_bdev 00:40:33.099 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 78a10e93-335c-4817-af35-b871bc968921 00:40:33.099 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=78a10e93-335c-4817-af35-b871bc968921 00:40:33.099 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:33.099 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:33.099 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:33.099 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:33.099 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:33.358 11:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78a10e93-335c-4817-af35-b871bc968921 -t 2000 00:40:33.358 [ 00:40:33.358 { 00:40:33.358 "aliases": [ 00:40:33.358 "lvs/lvol" 00:40:33.358 ], 00:40:33.358 "assigned_rate_limits": { 00:40:33.358 "r_mbytes_per_sec": 0, 00:40:33.358 "rw_ios_per_sec": 0, 00:40:33.358 "rw_mbytes_per_sec": 0, 00:40:33.358 "w_mbytes_per_sec": 0 00:40:33.358 }, 00:40:33.358 "block_size": 4096, 00:40:33.359 "claimed": false, 00:40:33.359 "driver_specific": { 00:40:33.359 "lvol": { 00:40:33.359 "base_bdev": "aio_bdev", 00:40:33.359 "clone": false, 00:40:33.359 "esnap_clone": false, 00:40:33.359 "lvol_store_uuid": "085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd", 00:40:33.359 "num_allocated_clusters": 38, 00:40:33.359 "snapshot": false, 00:40:33.359 "thin_provision": false 00:40:33.359 } 00:40:33.359 }, 00:40:33.359 "name": "78a10e93-335c-4817-af35-b871bc968921", 00:40:33.359 "num_blocks": 38912, 00:40:33.359 "product_name": "Logical Volume", 00:40:33.359 "supported_io_types": { 00:40:33.359 "abort": false, 00:40:33.359 "compare": false, 00:40:33.359 "compare_and_write": false, 00:40:33.359 "copy": false, 00:40:33.359 "flush": false, 00:40:33.359 "get_zone_info": false, 00:40:33.359 "nvme_admin": false, 00:40:33.359 "nvme_io": false, 00:40:33.359 "nvme_io_md": false, 00:40:33.359 "nvme_iov_md": false, 00:40:33.359 "read": true, 00:40:33.359 "reset": true, 00:40:33.359 "seek_data": true, 00:40:33.359 "seek_hole": true, 00:40:33.359 "unmap": true, 00:40:33.359 "write": true, 00:40:33.359 "write_zeroes": true, 00:40:33.359 "zcopy": false, 00:40:33.359 "zone_append": false, 00:40:33.359 "zone_management": false 00:40:33.359 }, 00:40:33.359 "uuid": 
"78a10e93-335c-4817-af35-b871bc968921", 00:40:33.359 "zoned": false 00:40:33.359 } 00:40:33.359 ] 00:40:33.618 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:33.618 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:33.618 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:33.915 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:33.915 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:33.915 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:34.204 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:34.204 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 78a10e93-335c-4817-af35-b871bc968921 00:40:34.464 11:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 085ec0b4-3d1b-4cbd-a6d1-aba2deb055bd 00:40:34.722 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:34.982 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:35.549 ************************************ 00:40:35.550 END TEST lvs_grow_clean 00:40:35.550 ************************************ 00:40:35.550 00:40:35.550 real 0m18.858s 00:40:35.550 user 0m17.327s 00:40:35.550 sys 0m3.084s 00:40:35.550 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:35.550 11:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:35.550 ************************************ 00:40:35.550 START TEST lvs_grow_dirty 00:40:35.550 ************************************ 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:35.550 11:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:35.550 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:35.809 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:35.809 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:36.068 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:36.068 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:36.068 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:36.328 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:36.328 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:36.328 11:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b lvol 150 00:40:36.587 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=33fe6a01-46c4-42c5-8fbc-fc5251180b57 00:40:36.587 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:36.587 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:36.846 [2024-12-05 11:24:01.307186] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:36.847 [2024-12-05 11:24:01.307423] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:36.847 true 00:40:36.847 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:36.847 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:37.106 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:37.106 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:37.365 11:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 33fe6a01-46c4-42c5-8fbc-fc5251180b57 00:40:37.625 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:37.625 [2024-12-05 11:24:02.211242] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:37.625 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:37.883 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103594 00:40:37.883 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:37.883 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103594 /var/tmp/bdevperf.sock 00:40:37.883 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103594 ']' 00:40:37.884 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:37.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
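bdevperf runs as a second process with its own RPC socket, separate from the target's /var/tmp/spdk.sock; a minimal sketch of the launch-and-attach pattern the trace performs next, using the flags from this run:

# -z makes bdevperf wait for RPC-driven work instead of starting I/O on its own
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
# once /var/tmp/bdevperf.sock answers, attach the target's subsystem as Nvme0n1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0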
00:40:37.884 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:37.884 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:37.884 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:37.884 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:37.884 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:38.141 [2024-12-05 11:24:02.566071] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:38.141 [2024-12-05 11:24:02.566169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103594 ] 00:40:38.141 [2024-12-05 11:24:02.711202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.141 [2024-12-05 11:24:02.773784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:38.398 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:38.398 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:38.398 11:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:38.655 Nvme0n1 00:40:38.655 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:38.913 [ 00:40:38.913 { 00:40:38.913 "aliases": [ 00:40:38.913 "33fe6a01-46c4-42c5-8fbc-fc5251180b57" 00:40:38.913 ], 00:40:38.913 "assigned_rate_limits": { 00:40:38.913 "r_mbytes_per_sec": 0, 00:40:38.913 "rw_ios_per_sec": 0, 00:40:38.913 "rw_mbytes_per_sec": 0, 00:40:38.913 "w_mbytes_per_sec": 0 00:40:38.913 }, 00:40:38.913 "block_size": 4096, 00:40:38.913 "claimed": false, 00:40:38.913 "driver_specific": { 00:40:38.913 "mp_policy": "active_passive", 00:40:38.913 "nvme": [ 00:40:38.913 { 00:40:38.913 "ctrlr_data": { 00:40:38.913 "ana_reporting": false, 00:40:38.913 "cntlid": 1, 00:40:38.913 "firmware_revision": "25.01", 00:40:38.913 "model_number": "SPDK bdev Controller", 00:40:38.913 "multi_ctrlr": true, 00:40:38.913 "oacs": { 00:40:38.913 "firmware": 0, 00:40:38.913 "format": 0, 00:40:38.913 "ns_manage": 0, 00:40:38.913 "security": 0 00:40:38.913 }, 00:40:38.913 "serial_number": "SPDK0", 00:40:38.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:38.913 "vendor_id": "0x8086" 00:40:38.913 }, 00:40:38.913 "ns_data": { 00:40:38.913 "can_share": true, 00:40:38.913 "id": 1 00:40:38.913 }, 00:40:38.913 "trid": { 00:40:38.913 "adrfam": "IPv4", 
00:40:38.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:38.913 "traddr": "10.0.0.2", 00:40:38.913 "trsvcid": "4420", 00:40:38.913 "trtype": "TCP" 00:40:38.913 }, 00:40:38.913 "vs": { 00:40:38.913 "nvme_version": "1.3" 00:40:38.913 } 00:40:38.913 } 00:40:38.913 ] 00:40:38.913 }, 00:40:38.913 "memory_domains": [ 00:40:38.913 { 00:40:38.913 "dma_device_id": "system", 00:40:38.913 "dma_device_type": 1 00:40:38.913 } 00:40:38.913 ], 00:40:38.913 "name": "Nvme0n1", 00:40:38.913 "num_blocks": 38912, 00:40:38.913 "numa_id": -1, 00:40:38.913 "product_name": "NVMe disk", 00:40:38.913 "supported_io_types": { 00:40:38.913 "abort": true, 00:40:38.913 "compare": true, 00:40:38.913 "compare_and_write": true, 00:40:38.913 "copy": true, 00:40:38.913 "flush": true, 00:40:38.913 "get_zone_info": false, 00:40:38.913 "nvme_admin": true, 00:40:38.913 "nvme_io": true, 00:40:38.913 "nvme_io_md": false, 00:40:38.913 "nvme_iov_md": false, 00:40:38.913 "read": true, 00:40:38.913 "reset": true, 00:40:38.913 "seek_data": false, 00:40:38.913 "seek_hole": false, 00:40:38.913 "unmap": true, 00:40:38.913 "write": true, 00:40:38.913 "write_zeroes": true, 00:40:38.913 "zcopy": false, 00:40:38.913 "zone_append": false, 00:40:38.913 "zone_management": false 00:40:38.913 }, 00:40:38.913 "uuid": "33fe6a01-46c4-42c5-8fbc-fc5251180b57", 00:40:38.913 "zoned": false 00:40:38.913 } 00:40:38.913 ] 00:40:38.913 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103624 00:40:38.913 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:38.913 11:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:39.171 Running I/O for 10 seconds... 
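perform_tests is issued over bdevperf's socket and left running in the background so the lvstore can be grown underneath live I/O; a minimal sketch of the overlap described by the @55-@60 trace entries around this point:

# Kick off the 10 s randwrite run, then resize the lvstore mid-flight
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 2                                   # let the workload ramp first
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore \
    -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b
wait "$run_test_pid"                      # collect the Latency(us) summary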
00:40:40.107 Latency(us) 00:40:40.107 [2024-12-05T11:24:04.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:40.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:40.107 Nvme0n1 : 1.00 9133.00 35.68 0.00 0.00 0.00 0.00 0.00 00:40:40.107 [2024-12-05T11:24:04.759Z] =================================================================================================================== 00:40:40.107 [2024-12-05T11:24:04.759Z] Total : 9133.00 35.68 0.00 0.00 0.00 0.00 0.00 00:40:40.107 00:40:41.043 11:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:41.043 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:41.043 Nvme0n1 : 2.00 9338.50 36.48 0.00 0.00 0.00 0.00 0.00 00:40:41.043 [2024-12-05T11:24:05.695Z] =================================================================================================================== 00:40:41.043 [2024-12-05T11:24:05.695Z] Total : 9338.50 36.48 0.00 0.00 0.00 0.00 0.00 00:40:41.043 00:40:41.301 true 00:40:41.301 11:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:41.301 11:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:41.560 11:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:41.560 11:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:41.560 11:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 103624 00:40:42.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:42.183 Nvme0n1 : 3.00 9340.33 36.49 0.00 0.00 0.00 0.00 0.00 00:40:42.183 [2024-12-05T11:24:06.835Z] =================================================================================================================== 00:40:42.183 [2024-12-05T11:24:06.835Z] Total : 9340.33 36.49 0.00 0.00 0.00 0.00 0.00 00:40:42.183 00:40:43.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:43.119 Nvme0n1 : 4.00 9378.75 36.64 0.00 0.00 0.00 0.00 0.00 00:40:43.119 [2024-12-05T11:24:07.771Z] =================================================================================================================== 00:40:43.119 [2024-12-05T11:24:07.771Z] Total : 9378.75 36.64 0.00 0.00 0.00 0.00 0.00 00:40:43.119 00:40:44.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:44.055 Nvme0n1 : 5.00 9377.00 36.63 0.00 0.00 0.00 0.00 0.00 00:40:44.055 [2024-12-05T11:24:08.708Z] =================================================================================================================== 00:40:44.056 [2024-12-05T11:24:08.708Z] Total : 9377.00 36.63 0.00 0.00 0.00 0.00 0.00 00:40:44.056 00:40:45.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:45.434 Nvme0n1 : 6.00 9299.50 36.33 0.00 0.00 0.00 0.00 0.00 00:40:45.434 [2024-12-05T11:24:10.086Z] 
=================================================================================================================== 00:40:45.434 [2024-12-05T11:24:10.086Z] Total : 9299.50 36.33 0.00 0.00 0.00 0.00 0.00 00:40:45.434 00:40:46.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:46.370 Nvme0n1 : 7.00 9260.00 36.17 0.00 0.00 0.00 0.00 0.00 00:40:46.370 [2024-12-05T11:24:11.022Z] =================================================================================================================== 00:40:46.370 [2024-12-05T11:24:11.022Z] Total : 9260.00 36.17 0.00 0.00 0.00 0.00 0.00 00:40:46.370 00:40:47.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:47.307 Nvme0n1 : 8.00 8772.12 34.27 0.00 0.00 0.00 0.00 0.00 00:40:47.307 [2024-12-05T11:24:11.959Z] =================================================================================================================== 00:40:47.307 [2024-12-05T11:24:11.959Z] Total : 8772.12 34.27 0.00 0.00 0.00 0.00 0.00 00:40:47.307 00:40:48.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:48.242 Nvme0n1 : 9.00 8790.56 34.34 0.00 0.00 0.00 0.00 0.00 00:40:48.242 [2024-12-05T11:24:12.894Z] =================================================================================================================== 00:40:48.242 [2024-12-05T11:24:12.894Z] Total : 8790.56 34.34 0.00 0.00 0.00 0.00 0.00 00:40:48.242 00:40:49.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:49.178 Nvme0n1 : 10.00 8847.50 34.56 0.00 0.00 0.00 0.00 0.00 00:40:49.178 [2024-12-05T11:24:13.830Z] =================================================================================================================== 00:40:49.178 [2024-12-05T11:24:13.830Z] Total : 8847.50 34.56 0.00 0.00 0.00 0.00 0.00 00:40:49.178 00:40:49.178 00:40:49.178 Latency(us) 00:40:49.178 [2024-12-05T11:24:13.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:49.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:49.178 Nvme0n1 : 10.01 8847.93 34.56 0.00 0.00 14461.74 4681.14 423424.98 00:40:49.178 [2024-12-05T11:24:13.830Z] =================================================================================================================== 00:40:49.178 [2024-12-05T11:24:13.830Z] Total : 8847.93 34.56 0.00 0.00 14461.74 4681.14 423424.98 00:40:49.178 { 00:40:49.178 "results": [ 00:40:49.178 { 00:40:49.178 "job": "Nvme0n1", 00:40:49.178 "core_mask": "0x2", 00:40:49.178 "workload": "randwrite", 00:40:49.178 "status": "finished", 00:40:49.178 "queue_depth": 128, 00:40:49.178 "io_size": 4096, 00:40:49.178 "runtime": 10.01398, 00:40:49.178 "iops": 8847.930593030942, 00:40:49.178 "mibps": 34.56222887902712, 00:40:49.178 "io_failed": 0, 00:40:49.178 "io_timeout": 0, 00:40:49.178 "avg_latency_us": 14461.741973737318, 00:40:49.178 "min_latency_us": 4681.142857142857, 00:40:49.178 "max_latency_us": 423424.9752380952 00:40:49.178 } 00:40:49.178 ], 00:40:49.178 "core_count": 1 00:40:49.178 } 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103594 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 103594 ']' 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 103594 00:40:49.178 
11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103594 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103594' 00:40:49.178 killing process with pid 103594 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 103594 00:40:49.178 Received shutdown signal, test time was about 10.000000 seconds 00:40:49.178 00:40:49.178 Latency(us) 00:40:49.178 [2024-12-05T11:24:13.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:49.178 [2024-12-05T11:24:13.830Z] =================================================================================================================== 00:40:49.178 [2024-12-05T11:24:13.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:49.178 11:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 103594 00:40:49.438 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:49.697 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:49.956 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:49.956 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 103011 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 103011 00:40:50.214 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 103011 Killed "${NVMF_APP[@]}" "$@" 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:50.214 
11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=103783 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 103783 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103783 ']' 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:50.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:50.214 11:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:50.473 [2024-12-05 11:24:14.869357] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:50.473 [2024-12-05 11:24:14.870302] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:50.473 [2024-12-05 11:24:14.870363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:50.473 [2024-12-05 11:24:15.021342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.473 [2024-12-05 11:24:15.077736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:50.473 [2024-12-05 11:24:15.077791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:50.473 [2024-12-05 11:24:15.077806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:50.473 [2024-12-05 11:24:15.077819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:50.473 [2024-12-05 11:24:15.077830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
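The app_setup_trace notices just above spell out both ways to pull trace data from this target; a minimal sketch, using only the app name ("nvmf"), the shm id (0, from the -i 0 launch flag), and the /dev/shm path that the notices themselves print — the destination path in the copy is illustrative:

# Snapshot tracepoint events from the live target at runtime:
spdk_trace -s nvmf -i 0

# Or copy the raw shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0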
00:40:50.473 [2024-12-05 11:24:15.078164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.736 [2024-12-05 11:24:15.161780] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:50.736 [2024-12-05 11:24:15.162151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:51.313 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:51.313 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:51.313 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:51.313 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.313 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:51.313 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:51.313 11:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:51.572 [2024-12-05 11:24:16.147016] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:51.572 [2024-12-05 11:24:16.147664] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:51.572 [2024-12-05 11:24:16.148045] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 33fe6a01-46c4-42c5-8fbc-fc5251180b57 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=33fe6a01-46c4-42c5-8fbc-fc5251180b57 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:51.572 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:52.139 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 33fe6a01-46c4-42c5-8fbc-fc5251180b57 -t 2000 00:40:52.139 [ 00:40:52.139 { 00:40:52.139 "aliases": [ 00:40:52.139 "lvs/lvol" 00:40:52.139 ], 00:40:52.139 
"assigned_rate_limits": { 00:40:52.139 "r_mbytes_per_sec": 0, 00:40:52.139 "rw_ios_per_sec": 0, 00:40:52.139 "rw_mbytes_per_sec": 0, 00:40:52.139 "w_mbytes_per_sec": 0 00:40:52.139 }, 00:40:52.139 "block_size": 4096, 00:40:52.139 "claimed": false, 00:40:52.139 "driver_specific": { 00:40:52.139 "lvol": { 00:40:52.139 "base_bdev": "aio_bdev", 00:40:52.139 "clone": false, 00:40:52.139 "esnap_clone": false, 00:40:52.139 "lvol_store_uuid": "39bbe326-44e1-4fd8-a068-34c0aefffc7b", 00:40:52.139 "num_allocated_clusters": 38, 00:40:52.139 "snapshot": false, 00:40:52.139 "thin_provision": false 00:40:52.139 } 00:40:52.139 }, 00:40:52.139 "name": "33fe6a01-46c4-42c5-8fbc-fc5251180b57", 00:40:52.139 "num_blocks": 38912, 00:40:52.139 "product_name": "Logical Volume", 00:40:52.139 "supported_io_types": { 00:40:52.139 "abort": false, 00:40:52.139 "compare": false, 00:40:52.139 "compare_and_write": false, 00:40:52.139 "copy": false, 00:40:52.139 "flush": false, 00:40:52.139 "get_zone_info": false, 00:40:52.139 "nvme_admin": false, 00:40:52.139 "nvme_io": false, 00:40:52.139 "nvme_io_md": false, 00:40:52.139 "nvme_iov_md": false, 00:40:52.139 "read": true, 00:40:52.139 "reset": true, 00:40:52.139 "seek_data": true, 00:40:52.139 "seek_hole": true, 00:40:52.139 "unmap": true, 00:40:52.139 "write": true, 00:40:52.139 "write_zeroes": true, 00:40:52.139 "zcopy": false, 00:40:52.139 "zone_append": false, 00:40:52.139 "zone_management": false 00:40:52.139 }, 00:40:52.139 "uuid": "33fe6a01-46c4-42c5-8fbc-fc5251180b57", 00:40:52.139 "zoned": false 00:40:52.139 } 00:40:52.139 ] 00:40:52.139 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:52.139 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:52.139 11:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:52.397 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:52.397 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:52.397 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:52.656 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:52.656 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:53.224 [2024-12-05 11:24:17.586989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:40:53.224 11:24:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:53.224 2024/12/05 11:24:17 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:39bbe326-44e1-4fd8-a068-34c0aefffc7b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:40:53.224 request: 00:40:53.224 { 00:40:53.224 "method": "bdev_lvol_get_lvstores", 00:40:53.224 "params": { 00:40:53.224 "uuid": "39bbe326-44e1-4fd8-a068-34c0aefffc7b" 00:40:53.224 } 00:40:53.224 } 00:40:53.224 Got JSON-RPC error response 00:40:53.224 GoRPCClient: error on JSON-RPC call 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:53.224 11:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:53.483 aio_bdev 00:40:53.483 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 33fe6a01-46c4-42c5-8fbc-fc5251180b57 00:40:53.483 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=33fe6a01-46c4-42c5-8fbc-fc5251180b57 00:40:53.483 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:53.483 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:40:53.483 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:53.483 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:53.483 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:54.053 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 33fe6a01-46c4-42c5-8fbc-fc5251180b57 -t 2000 00:40:54.053 [ 00:40:54.053 { 00:40:54.053 "aliases": [ 00:40:54.053 "lvs/lvol" 00:40:54.053 ], 00:40:54.053 "assigned_rate_limits": { 00:40:54.053 "r_mbytes_per_sec": 0, 00:40:54.053 "rw_ios_per_sec": 0, 00:40:54.053 "rw_mbytes_per_sec": 0, 00:40:54.053 "w_mbytes_per_sec": 0 00:40:54.053 }, 00:40:54.053 "block_size": 4096, 00:40:54.053 "claimed": false, 00:40:54.053 "driver_specific": { 00:40:54.053 "lvol": { 00:40:54.053 "base_bdev": "aio_bdev", 00:40:54.053 "clone": false, 00:40:54.053 "esnap_clone": false, 00:40:54.053 "lvol_store_uuid": "39bbe326-44e1-4fd8-a068-34c0aefffc7b", 00:40:54.053 "num_allocated_clusters": 38, 00:40:54.053 "snapshot": false, 00:40:54.053 "thin_provision": false 00:40:54.053 } 00:40:54.053 }, 00:40:54.053 "name": "33fe6a01-46c4-42c5-8fbc-fc5251180b57", 00:40:54.053 "num_blocks": 38912, 00:40:54.053 "product_name": "Logical Volume", 00:40:54.053 "supported_io_types": { 00:40:54.053 "abort": false, 00:40:54.053 "compare": false, 00:40:54.053 "compare_and_write": false, 00:40:54.053 "copy": false, 00:40:54.053 "flush": false, 00:40:54.053 "get_zone_info": false, 00:40:54.053 "nvme_admin": false, 00:40:54.053 "nvme_io": false, 00:40:54.053 "nvme_io_md": false, 00:40:54.053 "nvme_iov_md": false, 00:40:54.053 "read": true, 00:40:54.053 "reset": true, 00:40:54.053 "seek_data": true, 00:40:54.053 "seek_hole": true, 00:40:54.053 "unmap": true, 00:40:54.053 "write": true, 00:40:54.053 "write_zeroes": true, 00:40:54.053 "zcopy": false, 00:40:54.053 "zone_append": false, 00:40:54.053 "zone_management": false 00:40:54.053 }, 00:40:54.053 "uuid": "33fe6a01-46c4-42c5-8fbc-fc5251180b57", 00:40:54.053 "zoned": false 00:40:54.053 } 00:40:54.053 ] 00:40:54.053 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:40:54.053 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:54.053 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:54.312 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:54.312 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq 
-r '.[0].total_data_clusters' 00:40:54.312 11:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:54.878 11:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:54.878 11:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 33fe6a01-46c4-42c5-8fbc-fc5251180b57 00:40:54.878 11:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 39bbe326-44e1-4fd8-a068-34c0aefffc7b 00:40:55.137 11:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:55.395 11:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:55.962 00:40:55.962 real 0m20.367s 00:40:55.962 user 0m26.510s 00:40:55.962 sys 0m8.102s 00:40:55.962 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.962 ************************************ 00:40:55.962 END TEST lvs_grow_dirty 00:40:55.962 ************************************ 00:40:55.962 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:55.962 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:40:55.962 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:40:55.962 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:40:55.962 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:55.963 nvmf_trace.0 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:55.963 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 
-- # sync 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:56.529 rmmod nvme_tcp 00:40:56.529 rmmod nvme_fabrics 00:40:56.529 rmmod nvme_keyring 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 103783 ']' 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 103783 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 103783 ']' 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 103783 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103783 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:56.529 killing process with pid 103783 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103783' 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 103783 00:40:56.529 11:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 103783 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:56.788 11:24:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/target1/address ]] 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:40:56.788 00:40:56.788 real 0m41.709s 00:40:56.788 user 0m45.089s 00:40:56.788 sys 0m12.396s 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:56.788 ************************************ 00:40:56.788 END TEST nvmf_lvs_grow 00:40:56.788 ************************************ 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:56.788 ************************************ 00:40:56.788 START TEST nvmf_bdev_io_wait 00:40:56.788 ************************************ 00:40:56.788 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:40:57.047 * Looking for test storage... 
00:40:57.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:40:57.047 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.048 --rc genhtml_branch_coverage=1 00:40:57.048 --rc genhtml_function_coverage=1 00:40:57.048 --rc genhtml_legend=1 00:40:57.048 --rc geninfo_all_blocks=1 00:40:57.048 --rc geninfo_unexecuted_blocks=1 00:40:57.048 00:40:57.048 ' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.048 --rc genhtml_branch_coverage=1 00:40:57.048 --rc genhtml_function_coverage=1 00:40:57.048 --rc genhtml_legend=1 00:40:57.048 --rc geninfo_all_blocks=1 00:40:57.048 --rc geninfo_unexecuted_blocks=1 00:40:57.048 00:40:57.048 ' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.048 --rc genhtml_branch_coverage=1 00:40:57.048 --rc genhtml_function_coverage=1 00:40:57.048 --rc genhtml_legend=1 00:40:57.048 --rc geninfo_all_blocks=1 00:40:57.048 --rc geninfo_unexecuted_blocks=1 00:40:57.048 00:40:57.048 ' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:57.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:57.048 --rc genhtml_branch_coverage=1 00:40:57.048 --rc genhtml_function_coverage=1 00:40:57.048 --rc genhtml_legend=1 00:40:57.048 --rc geninfo_all_blocks=1 00:40:57.048 --rc 
geninfo_unexecuted_blocks=1 00:40:57.048 00:40:57.048 ' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:40:57.048 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@280 -- # nvmf_veth_init 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@223 -- # create_target_ns 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # create_main_bridge 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@105 -- # delete_main_bridge 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:57.049 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=2 
type=veth transport=tcp ip_pool=0x0a000001 max 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator0 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 
00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target0 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0 up 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target0_br 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target0 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:40:57.308 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip 
addr add 10.0.0.1/24 dev initiator0' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:57.309 10.0.0.1 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:40:57.309 10.0.0.2 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator0 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:40:57.309 11:24:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target0_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:40:57.309 
11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator1 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:40:57.309 11:24:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target1 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1 up 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target1_br 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.309 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target1 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772163 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 
10.0.0.3/24 dev initiator1 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:40:57.310 10.0.0.3 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772164 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:40:57.310 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:40:57.570 10.0.0.4 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator1 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.570 11:24:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:40:57.570 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:40:57.571 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:40:57.571 11:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target1_br 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:40:57.571 
11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 2 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 
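
Note: every address in this block is derived from the integer pool seeded at 0x0a000001 (167772161): val_to_ip splits a 32-bit value into dotted-quad octets, which is why 167772161 becomes 10.0.0.1 and 167772164 becomes 10.0.0.4. A plausible reconstruction, assuming plain shift-and-mask arithmetic (the trace only shows the final printf):

    # Hypothetical val_to_ip; only its printf output appears in the trace above.
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) \
            $(( (val >> 16) & 0xff )) \
            $(( (val >> 8)  & 0xff )) \
            $((  val        & 0xff ))
    }

    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772164   # -> 10.0.0.4

set_ip then applies the result with "ip addr add $ip/24 dev $dev" and mirrors it into /sys/class/net/$dev/ifalias, which is the file get_ip_address later cats back out during the ping checks below.
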
00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:57.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:57.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:40:57.571 00:40:57.571 --- 10.0.0.1 ping statistics --- 00:40:57.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.571 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:57.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:57.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:40:57.571 00:40:57.571 --- 10.0.0.2 ping statistics --- 00:40:57.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.571 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.571 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:40:57.572 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:40:57.572 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:40:57.572 00:40:57.572 --- 10.0.0.3 ping statistics --- 00:40:57.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.572 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:40:57.572 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:40:57.572 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:40:57.572 00:40:57.572 --- 10.0.0.4 ping statistics --- 00:40:57.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.572 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # return 0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:57.572 
11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:40:57.572 
11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:40:57.572 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 
00:40:57.573 ' 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:57.573 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:57.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=104260 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 104260 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 104260 ']' 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:57.831 11:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:57.831 [2024-12-05 11:24:22.287297] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:57.831 [2024-12-05 11:24:22.288710] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
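
Note: at this point common.sh has launched nvmf_tgt inside the namespace and is polling for its RPC socket before any rpc_cmd is issued. A rough sketch of that start-and-wait sequence, assuming the default /var/tmp/spdk.sock socket and a simple poll loop (the real waitforlisten in autotest_common.sh is more elaborate than this):

    # Launch command copied from the trace; the wait loop below is an approximation.
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the app is up and listening on the socket.
        "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
        sleep 0.1
    done
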
00:40:57.831 [2024-12-05 11:24:22.288778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:57.832 [2024-12-05 11:24:22.449132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:58.091 [2024-12-05 11:24:22.513679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:58.091 [2024-12-05 11:24:22.513923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:58.091 [2024-12-05 11:24:22.514132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:58.091 [2024-12-05 11:24:22.514332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:58.091 [2024-12-05 11:24:22.514411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:58.091 [2024-12-05 11:24:22.515719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:58.091 [2024-12-05 11:24:22.515809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:58.091 [2024-12-05 11:24:22.515853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:58.091 [2024-12-05 11:24:22.515865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.091 [2024-12-05 11:24:22.517779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.659 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:58.927 [2024-12-05 11:24:23.365953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread 
(nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:58.927 [2024-12-05 11:24:23.366258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:58.927 [2024-12-05 11:24:23.367213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:58.927 [2024-12-05 11:24:23.367265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:58.927 [2024-12-05 11:24:23.374259] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:58.927 Malloc0 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 
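
Note: the rpc_cmd calls above configure the freshly started target end to end: bdev pool sizing, resuming the init paused by --wait-for-rpc, the TCP transport, a malloc backing bdev, and the cnode1 subsystem with its namespace and listener. The same sequence replayed as explicit rpc.py calls (rpc_cmd wraps this inside the test framework; script and socket paths assumed from the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_set_options -p 5 -c 1               # small bdev_io pool/cache for the test
    $rpc framework_start_init                     # resume init held by --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o -u 8192  # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB ram disk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
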
00:40:58.927 [2024-12-05 11:24:23.438504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=104319 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=104321 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=104323 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:40:58.927 { 00:40:58.927 "params": { 00:40:58.927 "name": "Nvme$subsystem", 00:40:58.927 "trtype": "$TEST_TRANSPORT", 00:40:58.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:58.927 "adrfam": "ipv4", 00:40:58.927 "trsvcid": "$NVMF_PORT", 00:40:58.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:58.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:58.927 "hdgst": ${hdgst:-false}, 00:40:58.927 "ddgst": ${ddgst:-false} 00:40:58.927 }, 00:40:58.927 "method": "bdev_nvme_attach_controller" 00:40:58.927 } 00:40:58.927 EOF 00:40:58.927 )") 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:40:58.927 { 00:40:58.927 "params": { 00:40:58.927 "name": "Nvme$subsystem", 00:40:58.927 "trtype": "$TEST_TRANSPORT", 00:40:58.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:58.927 "adrfam": "ipv4", 00:40:58.927 "trsvcid": "$NVMF_PORT", 00:40:58.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:58.927 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:40:58.927 "hdgst": ${hdgst:-false}, 00:40:58.927 "ddgst": ${ddgst:-false} 00:40:58.927 }, 00:40:58.927 "method": "bdev_nvme_attach_controller" 00:40:58.927 } 00:40:58.927 EOF 00:40:58.927 )") 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:40:58.927 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:40:58.928 { 00:40:58.928 "params": { 00:40:58.928 "name": "Nvme$subsystem", 00:40:58.928 "trtype": "$TEST_TRANSPORT", 00:40:58.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:58.928 "adrfam": "ipv4", 00:40:58.928 "trsvcid": "$NVMF_PORT", 00:40:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:58.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:58.928 "hdgst": ${hdgst:-false}, 00:40:58.928 "ddgst": ${ddgst:-false} 00:40:58.928 }, 00:40:58.928 "method": "bdev_nvme_attach_controller" 00:40:58.928 } 00:40:58.928 EOF 00:40:58.928 )") 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:40:58.928 { 00:40:58.928 "params": { 00:40:58.928 "name": "Nvme$subsystem", 00:40:58.928 "trtype": "$TEST_TRANSPORT", 00:40:58.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:58.928 "adrfam": "ipv4", 00:40:58.928 "trsvcid": "$NVMF_PORT", 00:40:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:58.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:58.928 "hdgst": ${hdgst:-false}, 00:40:58.928 "ddgst": ${ddgst:-false} 00:40:58.928 }, 00:40:58.928 "method": "bdev_nvme_attach_controller" 00:40:58.928 } 00:40:58.928 EOF 00:40:58.928 )") 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=104325 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@394 -- # cat 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:40:58.928 "params": { 00:40:58.928 "name": "Nvme1", 00:40:58.928 "trtype": "tcp", 00:40:58.928 "traddr": "10.0.0.2", 00:40:58.928 "adrfam": "ipv4", 00:40:58.928 "trsvcid": "4420", 00:40:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:58.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:58.928 "hdgst": false, 00:40:58.928 "ddgst": false 00:40:58.928 }, 00:40:58.928 "method": "bdev_nvme_attach_controller" 00:40:58.928 }' 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:40:58.928 "params": { 00:40:58.928 "name": "Nvme1", 00:40:58.928 "trtype": "tcp", 00:40:58.928 "traddr": "10.0.0.2", 00:40:58.928 "adrfam": "ipv4", 00:40:58.928 "trsvcid": "4420", 00:40:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:58.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:58.928 "hdgst": false, 00:40:58.928 "ddgst": false 00:40:58.928 }, 00:40:58.928 "method": "bdev_nvme_attach_controller" 00:40:58.928 }' 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:40:58.928 "params": { 00:40:58.928 "name": "Nvme1", 00:40:58.928 "trtype": "tcp", 00:40:58.928 "traddr": "10.0.0.2", 00:40:58.928 "adrfam": "ipv4", 00:40:58.928 "trsvcid": "4420", 00:40:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:58.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:58.928 "hdgst": false, 00:40:58.928 "ddgst": false 00:40:58.928 }, 00:40:58.928 "method": "bdev_nvme_attach_controller" 00:40:58.928 }' 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:40:58.928 "params": { 00:40:58.928 "name": "Nvme1", 00:40:58.928 "trtype": "tcp", 00:40:58.928 "traddr": "10.0.0.2", 00:40:58.928 "adrfam": "ipv4", 00:40:58.928 "trsvcid": "4420", 00:40:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:58.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:58.928 "hdgst": false, 00:40:58.928 "ddgst": false 00:40:58.928 }, 00:40:58.928 "method": "bdev_nvme_attach_controller" 00:40:58.928 }' 00:40:58.928 [2024-12-05 11:24:23.506845] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:40:58.928 [2024-12-05 11:24:23.506955] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:58.928 [2024-12-05 11:24:23.508190] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:58.928 [2024-12-05 11:24:23.508278] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:58.928 [2024-12-05 11:24:23.509482] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:58.928 [2024-12-05 11:24:23.509566] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:58.928 [2024-12-05 11:24:23.541208] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:58.928 [2024-12-05 11:24:23.541295] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:58.928 11:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 104319 00:40:59.217 [2024-12-05 11:24:23.731778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.217 [2024-12-05 11:24:23.784542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:59.217 [2024-12-05 11:24:23.845632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.475 [2024-12-05 11:24:23.909599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.475 [2024-12-05 11:24:23.915736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:59.475 [2024-12-05 11:24:23.961957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:40:59.475 Running I/O for 1 seconds... 00:40:59.475 [2024-12-05 11:24:23.981372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.475 [2024-12-05 11:24:24.033235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:59.475 Running I/O for 1 seconds... 00:40:59.475 Running I/O for 1 seconds... 00:40:59.733 Running I/O for 1 seconds... 
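At this point the script has forked four bdevperf instances, one per I/O type, each pinned to its own core (-m) and given its own instance id (-i), which is why the EAL lines above show distinct hugepage file prefixes spdk1 through spdk4. A sketch of the launch pattern reconstructed from the xtrace; the /dev/fd/63 argument is a bash process substitution feeding each instance a one-controller bdev_nvme_attach_controller config, whose resolved params are printed above:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # -q 128: queue depth, -o 4096: 4 KiB I/Os, -t 1: run for one second, -s 256: 256 MB of DPDK memory
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The script itself waits on the PIDs one at a time (wait 104319 above, then 104321, 104323 and 104325 below), which amounts to the same thing; the per-workload result tables follow.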
00:41:00.666 11536.00 IOPS, 45.06 MiB/s
00:41:00.666 Latency(us)
00:41:00.666 [2024-12-05T11:24:25.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:00.666 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:41:00.666 Nvme1n1 : 1.01 11572.63 45.21 0.00 0.00 11012.83 3620.08 13606.52
00:41:00.666 [2024-12-05T11:24:25.318Z] ===================================================================================================================
00:41:00.666 [2024-12-05T11:24:25.318Z] Total : 11572.63 45.21 0.00 0.00 11012.83 3620.08 13606.52
00:41:00.666 8633.00 IOPS, 33.72 MiB/s
[2024-12-05T11:24:25.318Z] 10131.00 IOPS, 39.57 MiB/s
00:41:00.666 Latency(us)
00:41:00.666 [2024-12-05T11:24:25.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:00.666 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:41:00.666 Nvme1n1 : 1.01 8713.02 34.04 0.00 0.00 14636.86 5149.26 19598.38
00:41:00.666 [2024-12-05T11:24:25.318Z] ===================================================================================================================
00:41:00.666 [2024-12-05T11:24:25.318Z] Total : 8713.02 34.04 0.00 0.00 14636.86 5149.26 19598.38
00:41:00.666
00:41:00.666 Latency(us)
00:41:00.666 [2024-12-05T11:24:25.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:00.666 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:41:00.666 Nvme1n1 : 1.01 10242.98 40.01 0.00 0.00 12463.58 2559.02 18599.74
00:41:00.666 [2024-12-05T11:24:25.318Z] ===================================================================================================================
00:41:00.666 [2024-12-05T11:24:25.318Z] Total : 10242.98 40.01 0.00 0.00 12463.58 2559.02 18599.74
00:41:00.666 224568.00 IOPS, 877.22 MiB/s
00:41:00.666 Latency(us)
00:41:00.666 [2024-12-05T11:24:25.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:00.666 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:41:00.666 Nvme1n1 : 1.00 224213.89 875.84 0.00 0.00 567.94 269.17 1552.58
00:41:00.666 [2024-12-05T11:24:25.318Z] ===================================================================================================================
00:41:00.666 [2024-12-05T11:24:25.318Z] Total : 224213.89 875.84 0.00 0.00 567.94 269.17 1552.58
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 104321
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 104323
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 104325
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:41:00.666 11:24:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:41:00.666 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:41:00.925 rmmod nvme_tcp 00:41:00.925 rmmod nvme_fabrics 00:41:00.925 rmmod nvme_keyring 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 104260 ']' 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 104260 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 104260 ']' 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 104260 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104260 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:00.925 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:00.925 killing process with pid 104260 00:41:00.926 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104260' 00:41:00.926 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 104260 00:41:00.926 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 104260 00:41:01.184 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:41:01.184 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:41:01.184 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:41:01.184 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:41:01.184 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:01.185 
11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:41:01.185 00:41:01.185 real 0m4.365s 00:41:01.185 user 0m12.527s 00:41:01.185 sys 0m3.176s 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.185 ************************************ 00:41:01.185 END TEST nvmf_bdev_io_wait 00:41:01.185 ************************************ 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:01.185 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:01.445 ************************************ 00:41:01.445 START TEST nvmf_queue_depth 00:41:01.445 ************************************ 00:41:01.445 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:01.445 * Looking for test storage... 
00:41:01.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:01.445 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:01.445 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:01.445 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:01.445 11:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:01.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.445 --rc genhtml_branch_coverage=1 00:41:01.445 --rc genhtml_function_coverage=1 00:41:01.445 --rc genhtml_legend=1 00:41:01.445 --rc geninfo_all_blocks=1 00:41:01.445 --rc geninfo_unexecuted_blocks=1 00:41:01.445 00:41:01.445 ' 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:01.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.445 --rc genhtml_branch_coverage=1 00:41:01.445 --rc genhtml_function_coverage=1 00:41:01.445 --rc genhtml_legend=1 00:41:01.445 --rc geninfo_all_blocks=1 00:41:01.445 --rc geninfo_unexecuted_blocks=1 00:41:01.445 00:41:01.445 ' 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:01.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.445 --rc genhtml_branch_coverage=1 00:41:01.445 --rc genhtml_function_coverage=1 00:41:01.445 --rc genhtml_legend=1 00:41:01.445 --rc geninfo_all_blocks=1 00:41:01.445 --rc geninfo_unexecuted_blocks=1 00:41:01.445 00:41:01.445 ' 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:01.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.445 --rc genhtml_branch_coverage=1 00:41:01.445 --rc genhtml_function_coverage=1 00:41:01.445 --rc genhtml_legend=1 00:41:01.445 --rc geninfo_all_blocks=1 00:41:01.445 --rc 
geninfo_unexecuted_blocks=1 00:41:01.445 00:41:01.445 ' 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.445 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@280 -- # nvmf_veth_init 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@223 -- # create_target_ns 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # create_main_bridge 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@105 -- # delete_main_bridge 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:41:01.446 
11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:01.446 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator0 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:41:01.447 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.707 11:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target0 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0 up 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target0 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:41:01.707 11:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:41:01.707 10.0.0.1 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:41:01.707 10.0.0.2 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator0 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # 
local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:41:01.707 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target0_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp 
--dport 4420 -j ACCEPT' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator1 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:41:01.708 11:24:26 
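Condensing the veth/bridge steps traced above (including the target1 steps that follow), each interface pair the harness builds reduces to a handful of ip(8) commands. This is an illustrative sketch with names mirroring the trace, not the setup.sh helpers themselves:

    # one initiator/target pair: host side bridged, target side in the namespace
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk        # target end moves into the ns
    ip link set initiator1_br master nvmf_br      # both *_br ends join the bridge
    ip link set target1_br master nvmf_br
    ip link set initiator1 up
    ip link set initiator1_br up
    ip link set target1_br up
    ip netns exec nvmf_ns_spdk ip link set target1 up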
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target1 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1 up 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target1 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772163 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 
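The val_to_ip calls in the trace turn a single integer pool counter (167772162 = 0x0A000002, 167772163, ...) into dotted-quad addresses, which is why each pair advances the pool by two (ip_pool += 2). A stand-alone sketch of that conversion; the shift/mask arithmetic here is an assumption that matches the observed printf output, not a copy of setup.sh:

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
            $(( (val >>  8) & 0xff )) $((  val         & 0xff ))
    }
    val_to_ip 167772163   # -> 10.0.0.3, as in the trace above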
00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:41:01.708 10.0.0.3 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772164 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:41:01.708 10.0.0.4 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator1 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.708 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target1_br 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 2 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:41:01.709 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 
-- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:41:01.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:01.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:41:01.969 00:41:01.969 --- 10.0.0.1 ping statistics --- 00:41:01.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.969 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:41:01.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:01.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:41:01.969 00:41:01.969 --- 10.0.0.2 ping statistics --- 00:41:01.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.969 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:41:01.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:41:01.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:41:01.969 00:41:01.969 --- 10.0.0.3 ping statistics --- 00:41:01.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.969 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:41:01.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:41:01.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:41:01.969 00:41:01.969 --- 10.0.0.4 ping statistics --- 00:41:01.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:01.969 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # return 0 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:41:01.969 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:41:01.970 11:24:26 
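Stepping back over the four pings just completed: they form a deliberate matrix, with host-side initiator addresses checked from inside the namespace and in-namespace target addresses checked from the host, so traffic crosses the bridge in both directions. Written out by hand (the harness derives these from dev_map):

    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # ns   -> initiator0 (host side)
    ping -c 1 10.0.0.2                              # host -> target0 (in the ns)
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # ns   -> initiator1
    ping -c 1 10.0.0.4                              # host -> target1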
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:41:01.970 11:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:41:01.970 ' 00:41:01.970 11:24:26 
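Note how the legacy variables are recovered here: rather than parsing ip-addr output, the harness reads back the ifalias it wrote during setup. A sketch with a hypothetical helper name (the real helper in the trace is get_ip_address):

    get_ip_sketch() {                 # hypothetical name, for illustration only
        local dev=$1 ns=${2:-}
        ${ns:+ip netns exec "$ns"} cat "/sys/class/net/$dev/ifalias"
    }
    NVMF_FIRST_INITIATOR_IP=$(get_ip_sketch initiator0)            # 10.0.0.1
    NVMF_FIRST_TARGET_IP=$(get_ip_sketch target0 nvmf_ns_spdk)     # 10.0.0.2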
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=104614 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 104614 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 104614 ']' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:01.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:01.970 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:01.970 [2024-12-05 11:24:26.553474] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:01.970 [2024-12-05 11:24:26.554382] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:41:01.970 [2024-12-05 11:24:26.554433] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:02.242 [2024-12-05 11:24:26.711875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.242 [2024-12-05 11:24:26.767375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
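nvmfappstart boils down to launching the target inside the namespace and polling its RPC socket; -m 0x2 pins it to core 1, matching the "Reactor started on core 1" notice below. Condensed from the trace, with the polling loop as a simplified stand-in for waitforlisten rather than its actual implementation:

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1                     # wait for the UNIX-domain RPC socket
    done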
00:41:02.242 [2024-12-05 11:24:26.767445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:02.242 [2024-12-05 11:24:26.767461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:02.242 [2024-12-05 11:24:26.767474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:02.242 [2024-12-05 11:24:26.767485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:02.242 [2024-12-05 11:24:26.767847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:02.242 [2024-12-05 11:24:26.849433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:02.242 [2024-12-05 11:24:26.849811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:02.242 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:02.242 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:02.242 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:41:02.242 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:02.242 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:02.500 [2024-12-05 11:24:26.948636] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:02.500 Malloc0 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:02.500 11:24:26 
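The rpc_cmd calls here and in the entries that follow map one-to-one onto plain rpc.py invocations, with arguments exactly as traced. A sketch of the sequence, not the harness wrapper:

    rpc="scripts/rpc.py"              # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420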
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.500 11:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:02.500 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.500 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:02.500 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.500 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:02.501 [2024-12-05 11:24:27.012691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=104645 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 104645 /var/tmp/bdevperf.sock 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 104645 ']' 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:02.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:02.501 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:02.501 [2024-12-05 11:24:27.094069] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
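The initiator side is bdevperf running against its own RPC socket; the controller attach and the perform_tests kick-off that appear in the next entries go over that socket. The launch just traced, condensed (paths and flags per the log):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests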
00:41:02.501 [2024-12-05 11:24:27.094217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104645 ] 00:41:02.759 [2024-12-05 11:24:27.254064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.759 [2024-12-05 11:24:27.298794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.759 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:02.759 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:02.759 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:02.759 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.759 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:03.017 NVMe0n1 00:41:03.017 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:03.017 11:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:03.017 Running I/O for 10 seconds... 00:41:05.327 10142.00 IOPS, 39.62 MiB/s [2024-12-05T11:24:30.915Z] 10249.00 IOPS, 40.04 MiB/s [2024-12-05T11:24:31.862Z] 9920.67 IOPS, 38.75 MiB/s [2024-12-05T11:24:32.842Z] 9811.00 IOPS, 38.32 MiB/s [2024-12-05T11:24:33.776Z] 9811.60 IOPS, 38.33 MiB/s [2024-12-05T11:24:34.711Z] 9800.17 IOPS, 38.28 MiB/s [2024-12-05T11:24:35.654Z] 9819.29 IOPS, 38.36 MiB/s [2024-12-05T11:24:37.028Z] 10084.12 IOPS, 39.39 MiB/s [2024-12-05T11:24:37.963Z] 10277.67 IOPS, 40.15 MiB/s [2024-12-05T11:24:37.963Z] 10385.10 IOPS, 40.57 MiB/s 00:41:13.311 Latency(us) 00:41:13.311 [2024-12-05T11:24:37.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:13.311 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:13.311 Verification LBA range: start 0x0 length 0x4000 00:41:13.311 NVMe0n1 : 10.07 10416.72 40.69 0.00 0.00 97927.97 21346.01 76895.57 00:41:13.311 [2024-12-05T11:24:37.963Z] =================================================================================================================== 00:41:13.311 [2024-12-05T11:24:37.963Z] Total : 10416.72 40.69 0.00 0.00 97927.97 21346.01 76895.57 00:41:13.311 { 00:41:13.311 "results": [ 00:41:13.311 { 00:41:13.311 "job": "NVMe0n1", 00:41:13.311 "core_mask": "0x1", 00:41:13.311 "workload": "verify", 00:41:13.311 "status": "finished", 00:41:13.311 "verify_range": { 00:41:13.311 "start": 0, 00:41:13.311 "length": 16384 00:41:13.311 }, 00:41:13.311 "queue_depth": 1024, 00:41:13.311 "io_size": 4096, 00:41:13.311 "runtime": 10.067464, 00:41:13.311 "iops": 10416.724609097188, 00:41:13.311 "mibps": 40.69033050428589, 00:41:13.311 "io_failed": 0, 00:41:13.311 "io_timeout": 0, 00:41:13.311 "avg_latency_us": 97927.9718483928, 00:41:13.311 "min_latency_us": 21346.01142857143, 00:41:13.311 "max_latency_us": 76895.57333333333 00:41:13.311 } 00:41:13.311 ], 
00:41:13.311 "core_count": 1 00:41:13.311 } 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 104645 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 104645 ']' 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 104645 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104645 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:13.311 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:13.312 killing process with pid 104645 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104645' 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 104645 00:41:13.312 Received shutdown signal, test time was about 10.000000 seconds 00:41:13.312 00:41:13.312 Latency(us) 00:41:13.312 [2024-12-05T11:24:37.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:13.312 [2024-12-05T11:24:37.964Z] =================================================================================================================== 00:41:13.312 [2024-12-05T11:24:37.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 104645 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:41:13.312 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:41:13.312 rmmod nvme_tcp 00:41:13.571 rmmod nvme_fabrics 00:41:13.571 rmmod nvme_keyring 00:41:13.571 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:41:13.571 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:41:13.571 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:41:13.571 11:24:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 104614 ']' 00:41:13.571 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 104614 00:41:13.571 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 104614 ']' 00:41:13.571 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 104614 00:41:13.571 11:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:13.571 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:13.571 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104614 00:41:13.571 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:13.571 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:13.571 killing process with pid 104614 00:41:13.571 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104614' 00:41:13.571 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 104614 00:41:13.571 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 104614 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:41:13.831 11:24:38 
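Teardown runs roughly in reverse order of setup, and the entries that follow show why target0 and target1 hit the continue branch: deleting the namespace already removed the target-side veth ends (their /sys/class/net/.../address files are gone), so only the host-side devices, the bridge, and the tagged iptables rules need explicit cleanup. Condensed:

    ip netns delete nvmf_ns_spdk      # takes target0/target1 with it
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1
    # drop only the rules the test added, keyed on their SPDK_NVMF comment tag:
    iptables-save | grep -v SPDK_NVMF | iptables-restore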
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:41:13.831 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:41:14.090 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:14.090 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:41:14.091 00:41:14.091 real 0m12.710s 00:41:14.091 user 0m20.247s 00:41:14.091 sys 0m3.114s 00:41:14.091 ************************************ 00:41:14.091 END TEST nvmf_queue_depth 00:41:14.091 ************************************ 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:14.091 ************************************ 00:41:14.091 START TEST nvmf_target_multipath 00:41:14.091 ************************************ 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:14.091 * Looking for test storage... 00:41:14.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:14.091 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 
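The queue_depth teardown above ends with the suite's standard process-kill pattern: probe the pid with kill -0, refuse to touch a sudo wrapper, then kill and wait. A minimal sketch of that pattern, reconstructed from the commands visible in the trace for pids 104645 and 104614 (the helper name and body are simplified, not the verbatim autotest_common.sh function):

    # Sketch: terminate a test daemon by pid, as the trace does above.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1          # is it still alive?
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")     # "reactor_0"/"reactor_1" for the SPDK apps
            [[ $name == sudo ]] && return 1             # never kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null          # reap it; wait only works for our children
    }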
00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:14.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.352 --rc genhtml_branch_coverage=1 00:41:14.352 --rc genhtml_function_coverage=1 00:41:14.352 --rc genhtml_legend=1 00:41:14.352 --rc geninfo_all_blocks=1 00:41:14.352 --rc geninfo_unexecuted_blocks=1 00:41:14.352 00:41:14.352 ' 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:14.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.352 --rc genhtml_branch_coverage=1 00:41:14.352 --rc genhtml_function_coverage=1 00:41:14.352 --rc genhtml_legend=1 00:41:14.352 --rc geninfo_all_blocks=1 00:41:14.352 --rc geninfo_unexecuted_blocks=1 00:41:14.352 00:41:14.352 ' 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:41:14.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.352 --rc genhtml_branch_coverage=1 00:41:14.352 --rc genhtml_function_coverage=1 00:41:14.352 --rc genhtml_legend=1 00:41:14.352 --rc geninfo_all_blocks=1 00:41:14.352 --rc geninfo_unexecuted_blocks=1 00:41:14.352 00:41:14.352 ' 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:14.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.352 --rc genhtml_branch_coverage=1 00:41:14.352 --rc genhtml_function_coverage=1 00:41:14.352 --rc genhtml_legend=1 00:41:14.352 --rc geninfo_all_blocks=1 00:41:14.352 --rc geninfo_unexecuted_blocks=1 00:41:14.352 00:41:14.352 ' 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:14.352 11:24:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:14.352 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:14.353 11:24:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:14.353 
11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:41:14.353 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:41:14.354 11:24:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target0 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:41:14.354 11:24:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:41:14.354 10.0.0.1 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:41:14.354 11:24:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:41:14.354 10.0.0.2 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:41:14.354 11:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 
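The 167772161/167772162 values above come from the setup's ip_pool counter; val_to_ip turns the 32-bit value into a dotted quad before it is assigned and mirrored into the device's ifalias, which later get_ip_address calls read back. A sketch of that conversion and assignment; the byte-shifting is an assumed reconstruction, since the trace only shows the final printf with the octets already split:

    # Sketch: integer -> dotted quad, then assign and record it. Pair 0 uses
    # 167772161 = 0x0A000001 -> 10.0.0.1 and 167772162 -> 10.0.0.2, as above.
    val_to_ip_sketch() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
            $((val >> 8 & 0xff)) $((val & 0xff))
    }
    set_ip_sketch() {
        local dev=$1 ip
        ip=$(val_to_ip_sketch "$2")
        ip addr add "$ip/24" dev "$dev"
        echo "$ip" | tee "/sys/class/net/$dev/ifalias"   # read back later by get_ip_address
    }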
00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:41:14.631 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:41:14.632 11:24:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:14.632 
11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:41:14.632 10.0.0.3 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns 
exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:41:14.632 10.0.0.4 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:41:14.632 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:41:14.633 11:24:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:14.633 11:24:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:41:14.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
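At this point both interface pairs are up and the first connectivity ping is running. Condensed into plain ip(8) commands, the topology the trace built for pair 0 looks like the sketch below; the commands are lifted from the eval lines above, pair 1 repeats them with 10.0.0.3/10.0.0.4, and the ordering is simplified:

    # Sketch: one initiator/target veth pair bridged on the host, target half in a netns.
    ip netns add nvmf_ns_spdk
    ip link add nvmf_br type bridge
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                  # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br                # host-side peers hang off the bridge
    ip link set target0_br    master nvmf_br
    for dev in nvmf_br initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
    ip netns exec nvmf_ns_spdk ip link set target0 up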
00:41:14.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:41:14.633 00:41:14.633 --- 10.0.0.1 ping statistics --- 00:41:14.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.633 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:14.633 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:41:14.634 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:41:14.634 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:14.634 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:41:14.634 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:41:14.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:14.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:41:14.939 00:41:14.939 --- 10.0.0.2 ping statistics --- 00:41:14.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.939 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:41:14.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:41:14.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:41:14.939 00:41:14.939 --- 10.0.0.3 ping statistics --- 00:41:14.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.939 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:41:14.939 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:41:14.940 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:41:14.940 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.134 ms 00:41:14.940 00:41:14.940 --- 10.0.0.4 ping statistics --- 00:41:14.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.940 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # return 0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:14.940 11:24:39 
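With both pairs verified end to end, nvmf_legacy_env now rebuilds the flat variables that older test scripts (and target/multipath.sh below) consume; the interface names come straight from dev_map and each IP variable is one more ifalias lookup. The derivations traced here and in the following records amount to:

    # Legacy environment derived from dev_map and the ifalias data:
    #   NVMF_TARGET_INTERFACE=target0      NVMF_TARGET_INTERFACE2=target1
    #   NVMF_FIRST_INITIATOR_IP=10.0.0.1   NVMF_SECOND_INITIATOR_IP=10.0.0.3
    #   NVMF_FIRST_TARGET_IP=10.0.0.2      NVMF_SECOND_TARGET_IP=10.0.0.4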
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:14.940 11:24:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:14.940 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:41:14.941 11:24:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:41:14.941 ' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # nvmfpid=105015 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@329 -- # waitforlisten 105015 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 105015 ']' 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:14.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
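The "Waiting for process..." message above refers to the target application launched a few records earlier; that launch line is what makes this an interrupt-mode run, so it is worth isolating. The target starts inside the nvmf_ns_spdk namespace with instance id 0, all tracepoint groups enabled, a four-core mask, and --interrupt-mode, which the reactor and spdk_thread notices below confirm. Reformatted for readability (command verbatim from the trace; the pid capture is sketched, the trace records nvmfpid=105015):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!                 # 105015 in this run
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock answers RPCs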
00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:14.941 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:14.941 [2024-12-05 11:24:39.521894] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:14.941 [2024-12-05 11:24:39.523658] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:41:14.941 [2024-12-05 11:24:39.523877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:15.200 [2024-12-05 11:24:39.688788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:15.200 [2024-12-05 11:24:39.754951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:15.200 [2024-12-05 11:24:39.755242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:15.200 [2024-12-05 11:24:39.755452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:15.200 [2024-12-05 11:24:39.755667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:15.200 [2024-12-05 11:24:39.755744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:15.200 [2024-12-05 11:24:39.756992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:15.200 [2024-12-05 11:24:39.757067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:15.200 [2024-12-05 11:24:39.757152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:15.200 [2024-12-05 11:24:39.757156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.200 [2024-12-05 11:24:39.843188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:15.200 [2024-12-05 11:24:39.843791] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:15.200 [2024-12-05 11:24:39.844626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:15.200 [2024-12-05 11:24:39.844832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:15.200 [2024-12-05 11:24:39.845304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
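All four poll groups and the app thread now run in interrupt mode, so configuration proceeds over the RPC socket. The calls traced below condense to the following sequence: create the TCP transport, back it with a 64 MB malloc bdev (512-byte blocks), expose the bdev through subsystem cnode1 with ANA reporting enabled (-r), and listen on both target addresses so that two paths to the same namespace exist:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.4 -s 4420

The initiator side then connects once per listener (nvme connect ... -a 10.0.0.2 and ... -a 10.0.0.4, same NQN), which is what yields the two controller paths nvme0c0n1 and nvme0c1n1 that the ANA-state checks below operate on.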
00:41:15.458 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:15.458 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:41:15.458 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:41:15.458 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:15.458 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:15.458 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:15.458 11:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:15.716 [2024-12-05 11:24:40.223362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:15.716 11:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:41:15.974 Malloc0 00:41:15.974 11:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:41:16.230 11:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:16.796 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:16.796 [2024-12-05 11:24:41.379299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:16.796 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:41:17.054 [2024-12-05 11:24:41.607271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:41:17.054 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:41:17.312 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:41:17.312 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:41:17.312 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:41:17.312 11:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:17.312 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:17.312 11:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=105139 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:41:19.837 11:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:41:19.837 [global] 00:41:19.837 thread=1 00:41:19.837 invalidate=1 00:41:19.837 rw=randrw 00:41:19.837 time_based=1 00:41:19.837 runtime=6 00:41:19.837 ioengine=libaio 00:41:19.837 direct=1 00:41:19.837 bs=4096 00:41:19.837 iodepth=128 00:41:19.837 norandommap=0 00:41:19.837 numjobs=1 00:41:19.837 00:41:19.837 verify_dump=1 00:41:19.837 verify_backlog=512 00:41:19.837 verify_state_save=0 00:41:19.837 do_verify=1 00:41:19.837 verify=crc32c-intel 00:41:19.837 [job0] 00:41:19.837 filename=/dev/nvme0n1 00:41:19.837 Could not set queue depth (nvme0n1) 00:41:19.837 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:19.837 fio-3.35 00:41:19.837 Starting 1 thread 00:41:20.405 11:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:41:20.663 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:20.919 11:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:41:21.853 11:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:41:21.853 11:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:41:21.853 11:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:21.853 11:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:41:22.419 11:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:22.677 11:24:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:41:23.612 11:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:41:23.612 11:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:41:23.612 11:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:23.612 11:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 105139 00:41:26.140 00:41:26.140 job0: (groupid=0, jobs=1): err= 0: pid=105165: Thu Dec 5 11:24:50 2024 00:41:26.140 read: IOPS=13.3k, BW=51.9MiB/s (54.4MB/s)(312MiB/6005msec) 00:41:26.140 slat (usec): min=3, max=7243, avg=42.58, stdev=210.53 00:41:26.140 clat (usec): min=304, max=49801, avg=6540.07, stdev=2057.67 00:41:26.140 lat (usec): min=314, max=49816, avg=6582.65, stdev=2064.43 00:41:26.140 clat percentiles (usec): 00:41:26.141 | 1.00th=[ 3818], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5669], 00:41:26.141 | 30.00th=[ 5932], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6587], 00:41:26.141 | 70.00th=[ 6783], 80.00th=[ 7177], 90.00th=[ 7963], 95.00th=[ 8848], 00:41:26.141 | 99.00th=[10552], 99.50th=[11600], 99.90th=[44303], 99.95th=[49021], 00:41:26.141 | 99.99th=[49546] 00:41:26.141 bw ( KiB/s): min=11680, max=33976, per=51.55%, avg=27390.64, stdev=7318.75, samples=11 00:41:26.141 iops : min= 2920, max= 8494, avg=6847.64, stdev=1829.69, samples=11 00:41:26.141 write: IOPS=7876, BW=30.8MiB/s (32.3MB/s)(161MiB/5224msec); 0 zone resets 00:41:26.141 slat (usec): min=4, max=4861, avg=52.72, stdev=114.99 00:41:26.141 clat (usec): min=262, max=15381, avg=5820.82, stdev=985.82 00:41:26.141 lat (usec): min=292, max=15604, avg=5873.54, stdev=990.41 00:41:26.141 clat percentiles (usec): 00:41:26.141 | 1.00th=[ 3097], 5.00th=[ 4178], 10.00th=[ 4883], 20.00th=[ 5276], 00:41:26.141 | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:41:26.141 | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6718], 95.00th=[ 7111], 00:41:26.141 | 99.00th=[ 9110], 99.50th=[10290], 99.90th=[11994], 99.95th=[13304], 00:41:26.141 | 99.99th=[15270] 00:41:26.141 bw ( KiB/s): min=12263, max=34736, per=86.82%, avg=27355.00, stdev=7126.17, samples=11 00:41:26.141 iops : min= 3065, max= 8684, avg=6838.64, stdev=1781.71, samples=11 00:41:26.141 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:41:26.141 lat (msec) : 2=0.06%, 4=2.34%, 10=96.25%, 20=1.17%, 50=0.14% 00:41:26.141 cpu : usr=5.38%, sys=24.95%, ctx=9716, majf=0, minf=114 00:41:26.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:41:26.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:26.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:26.141 issued rwts: total=79761,41148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:26.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:26.141 00:41:26.141 Run status group 0 (all jobs): 00:41:26.141 READ: bw=51.9MiB/s (54.4MB/s), 51.9MiB/s-51.9MiB/s (54.4MB/s-54.4MB/s), io=312MiB (327MB), run=6005-6005msec 00:41:26.141 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=161MiB (169MB), run=5224-5224msec 00:41:26.141 00:41:26.141 Disk stats (read/write): 00:41:26.141 nvme0n1: ios=78875/40214, merge=0/0, ticks=472082/219514, in_queue=691596, util=98.55% 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:41:26.141 11:24:50 
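The first fio pass (fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v: 4 KiB random read/write at queue depth 128 for 6 seconds, with crc32c verify) has completed against /dev/nvme0n1 while the paths were flipped underneath it, and the first listener has just been set back to optimized; the matching RPC for 10.0.0.4 follows. Each transition is confirmed by multipath.sh's check_ana_state, which polls the path's ana_state in sysfs for up to 20 seconds. A sketch of that helper as traced at lines @18-@26 (the real comparison uses a glob-escaped pattern, elided here):

    # Poll /sys/block/<path>/ana_state until it reports the expected state;
    # give up after roughly 20 one-second sleeps.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1s
        done
    }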
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:41:26.141 11:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:41:27.072 11:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:41:27.072 11:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:41:27.072 11:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:41:27.072 11:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:41:27.072 11:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=105291 00:41:27.072 11:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:41:27.072 11:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:41:27.072 [global] 00:41:27.072 thread=1 00:41:27.072 invalidate=1 00:41:27.072 rw=randrw 00:41:27.072 time_based=1 00:41:27.072 runtime=6 00:41:27.072 ioengine=libaio 00:41:27.072 direct=1 00:41:27.072 bs=4096 00:41:27.072 iodepth=128 00:41:27.072 norandommap=0 00:41:27.072 numjobs=1 00:41:27.072 00:41:27.072 verify_dump=1 00:41:27.072 verify_backlog=512 00:41:27.072 verify_state_save=0 00:41:27.072 do_verify=1 00:41:27.072 verify=crc32c-intel 00:41:27.072 [job0] 00:41:27.072 filename=/dev/nvme0n1 00:41:27.072 Could not set queue depth (nvme0n1) 00:41:27.330 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:27.330 fio-3.35 00:41:27.330 Starting 1 thread 00:41:28.267 11:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:41:28.556 11:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:28.813 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:28.814 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:28.814 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:28.814 11:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:41:29.742 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:41:29.742 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:29.742 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:29.742 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:41:30.307 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:30.308 11:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:41:31.682 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:41:31.682 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:41:31.682 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:41:31.682 11:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 105291 00:41:33.585 00:41:33.585 job0: (groupid=0, jobs=1): err= 0: pid=105312: Thu Dec 5 11:24:57 2024 00:41:33.585 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(322MiB/6003msec) 00:41:33.585 slat (usec): min=3, max=7872, avg=36.69, stdev=195.06 00:41:33.585 clat (usec): min=309, max=15534, avg=6332.86, stdev=1387.35 00:41:33.585 lat (usec): min=323, max=15543, avg=6369.56, stdev=1401.96 00:41:33.585 clat percentiles (usec): 00:41:33.585 | 1.00th=[ 3097], 5.00th=[ 4080], 10.00th=[ 4621], 20.00th=[ 5276], 00:41:33.585 | 30.00th=[ 5800], 40.00th=[ 6128], 50.00th=[ 6390], 60.00th=[ 6587], 00:41:33.585 | 70.00th=[ 6849], 80.00th=[ 7177], 90.00th=[ 7898], 95.00th=[ 8717], 00:41:33.585 | 99.00th=[10290], 99.50th=[10814], 99.90th=[12780], 99.95th=[14091], 00:41:33.585 | 99.99th=[14484] 00:41:33.585 bw ( KiB/s): min=13176, max=43840, per=50.78%, avg=27900.91, stdev=9995.94, samples=11 00:41:33.585 iops : min= 3294, max=10960, avg=6975.18, stdev=2498.96, samples=11 00:41:33.585 write: IOPS=8125, BW=31.7MiB/s (33.3MB/s)(166MiB/5228msec); 0 zone resets 00:41:33.585 slat (usec): min=4, max=4317, avg=46.67, stdev=105.64 00:41:33.585 clat (usec): min=378, max=14516, avg=5636.10, stdev=1283.26 00:41:33.585 lat (usec): min=417, max=14688, avg=5682.77, stdev=1295.27 00:41:33.585 clat percentiles (usec): 00:41:33.585 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3785], 20.00th=[ 4424], 00:41:33.585 | 30.00th=[ 5145], 40.00th=[ 5669], 50.00th=[ 5932], 60.00th=[ 6128], 00:41:33.585 | 70.00th=[ 6325], 80.00th=[ 6521], 90.00th=[ 6849], 95.00th=[ 7242], 00:41:33.585 | 99.00th=[ 9110], 99.50th=[ 9896], 99.90th=[10945], 99.95th=[13304], 00:41:33.585 | 99.99th=[14091] 00:41:33.585 bw ( KiB/s): min=13816, 
max=44168, per=85.87%, avg=27908.45, stdev=9810.96, samples=11 00:41:33.585 iops : min= 3454, max=11042, avg=6977.09, stdev=2452.73, samples=11 00:41:33.585 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.03% 00:41:33.585 lat (msec) : 2=0.12%, 4=7.20%, 10=91.57%, 20=1.05% 00:41:33.585 cpu : usr=5.36%, sys=24.44%, ctx=10093, majf=0, minf=102 00:41:33.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:41:33.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:33.586 issued rwts: total=82451,42480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:33.586 00:41:33.586 Run status group 0 (all jobs): 00:41:33.586 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=322MiB (338MB), run=6003-6003msec 00:41:33.586 WRITE: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=166MiB (174MB), run=5228-5228msec 00:41:33.586 00:41:33.586 Disk stats (read/write): 00:41:33.586 nvme0n1: ios=81701/41530, merge=0/0, ticks=482830/220650, in_queue=703480, util=98.65% 00:41:33.586 11:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:33.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:41:33.586 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:41:33.844 11:24:58 
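
The ana_state polling traced above is check_ana_state() from target/multipath.sh: it re-reads the block device's sysfs attribute once per second until the expected ANA state appears, giving up after 20 tries. A condensed sketch reconstructed from the xtrace (not copied verbatim from the script):

check_ana_state() {
    local path=$1 ana_state=$2            # e.g. check_ana_state nvme0c1n1 non-optimized
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Loop while the sysfs file is missing or still reports the old state.
    while [[ ! -e $ana_state_f || "$(<"$ana_state_f")" != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1  # ~20 s budget, as in the trace
        sleep 1s
    done
}
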
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:41:33.844 rmmod nvme_tcp 00:41:33.844 rmmod nvme_fabrics 00:41:33.844 rmmod nvme_keyring 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n 105015 ']' 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@337 -- # killprocess 105015 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 105015 ']' 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 105015 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105015 00:41:33.844 killing process with pid 105015 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105015' 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 105015 00:41:33.844 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 105015 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:41:34.102 11:24:58 
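
The shutdown above goes through killprocess() (common/autotest_common.sh): confirm the pid is still alive with kill -0, look up its command name with ps so the log records what is being killed, signal it, then wait for it to exit. A simplified sketch of that flow (the sudo special-casing visible in the trace is omitted):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                        # bail out if already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # works because the app is our child
}
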
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:41:34.102 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:41:34.361 
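
nvmf_fini then unwinds the virtual network: the bridge goes first, and the dev_map walk only deletes endpoints that still exist in the default namespace; target0/target1 hit the "continue" branch above because they vanished together with the target netns. A minimal sketch of that cleanup, assuming the same dev_map layout:

# bridge first, then any host-side veth endpoints that survived
[[ -e /sys/class/net/nvmf_br/address ]] && ip link delete nvmf_br
for dev in "${dev_map[@]}"; do
    # endpoints moved into the (already removed) netns no longer show up here
    [[ -e /sys/class/net/$dev/address ]] || continue
    ip link delete "$dev"
done
dev_map=()    # reset for the next test
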
11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:41:34.361 ************************************ 00:41:34.361 END TEST nvmf_target_multipath 00:41:34.361 ************************************ 00:41:34.361 00:41:34.361 real 0m20.250s 00:41:34.361 user 1m6.995s 00:41:34.361 sys 0m12.170s 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:34.361 ************************************ 00:41:34.361 START TEST nvmf_zcopy 00:41:34.361 ************************************ 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:34.361 * Looking for test storage... 
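
Firewall cleanup is the one-liner iptr() from nvmf/common.sh, traced at the end of the multipath test above: every rule the tests insert carries an "SPDK_NVMF" comment, so the whole set can be dropped by filtering iptables-save output and re-loading it:

iptr() {
    # keep everything except the rules the NVMe-oF tests tagged themselves
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
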
00:41:34.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:41:34.361 11:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:34.623 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.624 --rc genhtml_branch_coverage=1 00:41:34.624 --rc genhtml_function_coverage=1 00:41:34.624 --rc genhtml_legend=1 00:41:34.624 --rc geninfo_all_blocks=1 00:41:34.624 --rc geninfo_unexecuted_blocks=1 00:41:34.624 00:41:34.624 ' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.624 --rc genhtml_branch_coverage=1 00:41:34.624 --rc genhtml_function_coverage=1 00:41:34.624 --rc genhtml_legend=1 00:41:34.624 --rc geninfo_all_blocks=1 00:41:34.624 --rc geninfo_unexecuted_blocks=1 00:41:34.624 00:41:34.624 ' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.624 --rc genhtml_branch_coverage=1 00:41:34.624 --rc genhtml_function_coverage=1 00:41:34.624 --rc genhtml_legend=1 00:41:34.624 --rc geninfo_all_blocks=1 00:41:34.624 --rc geninfo_unexecuted_blocks=1 00:41:34.624 00:41:34.624 ' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:34.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.624 --rc genhtml_branch_coverage=1 00:41:34.624 --rc genhtml_function_coverage=1 00:41:34.624 --rc genhtml_legend=1 00:41:34.624 --rc geninfo_all_blocks=1 00:41:34.624 --rc geninfo_unexecuted_blocks=1 00:41:34.624 00:41:34.624 ' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
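
The scripts/common.sh activity above is the lcov version gate: lt() tokenizes both version strings on '.', '-' and ':' and compares them field by field, padding the shorter with zeros. A compact reconstruction of the same logic (renamed here; assumes purely numeric fields, which is all the trace exercises):

version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not strictly "less than"
}
# version_lt "$(lcov --version | awk '{print $NF}')" 2 → picks the branch-coverage flags
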
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
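
Worth noting in the common.sh setup above: the host identity is generated fresh per run (nvme gen-hostnqn) and packed into the NVME_HOST array so later connects can splice the same credentials in. A hypothetical invocation, purely for illustration (the subsystem NQN and address here are this run's values):

# NVME_HOST was set to (--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID)
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:testnqn -a 10.0.0.2 -s 4420
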
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:41:34.624 11:24:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@280 -- # nvmf_veth_init 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@223 -- # create_target_ns 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.624 11:24:59 
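
nvmftestinit rebuilds the virtual topology from scratch for this test. Step one, traced above, gives the target its own network namespace; every "target-side" command later in the log is simply prefixed with the resulting NVMF_TARGET_NS_CMD. In essence:

create_target_ns() {
    local ns=nvmf_ns_spdk
    ip netns add "$ns"
    NVMF_TARGET_NS_CMD=(ip netns exec "$ns")      # reused for all in-namespace commands
    "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up  # loopback starts down in a fresh netns
}
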
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # create_main_bridge 00:41:34.624 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@105 -- # delete_main_bridge 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
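
Step two is the shared bridge that all host-side veth ends plug into, plus a FORWARD rule so traffic may hairpin across it. Note the SPDK_NVMF comment tag that iptr() greps away at teardown. Roughly:

create_main_bridge() {
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # the comment makes the rule identifiable (and removable) at cleanup
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
}
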
nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator0 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target0 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0 up 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # 
local dev=target0_br in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target0 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:41:34.625 10.0.0.1 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:41:34.625 11:24:59 
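
The IP assignment above doubles as bookkeeping: set_ip both configures the address and writes it into the device's ifalias attribute, which is what later helpers cat to discover "the IP of initiator0". A sketch (the in-namespace variant just wraps both commands in ip netns exec):

set_ip() {
    local dev=$1 ip=$2
    ip addr add "$ip/24" dev "$dev"
    # stash the address where get_ip_address() can read it back
    echo "$ip" | tee "/sys/class/net/$dev/ifalias"
}

The bare "10.0.0.1" / "10.0.0.2" lines in the log are tee echoing that write to stdout.
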
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:41:34.625 10.0.0.2 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator0 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:41:34.625 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.626 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:41:34.626 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:41:34.626 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:41:34.626 11:24:59 
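
Condensing the xtrace, each initiator/target pair amounts to the following (pair 0 shown; the INPUT rule appears in the trace just after this point). The initiatorN and targetN ends carry the addresses, targetN inside the namespace, while the *_br peers stay in the default namespace and are enslaved to nvmf_br:

# each pair: two veths; *_br peers stay host-side and join the bridge
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set initiator0 up && ip link set initiator0_br up && ip link set target0_br up

ip link set target0 netns nvmf_ns_spdk          # target end moves in with the SPDK app
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip netns exec nvmf_ns_spdk ip link set target0 up

ip link set initiator0_br master nvmf_br        # both bridge-side peers join nvmf_br
ip link set target0_br master nvmf_br

# NVMe/TCP ingress on the initiator side, tagged for iptr() cleanup
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
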
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:41:34.626 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:41:34.626 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target0_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link 
add initiator1 type veth peer name initiator1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator1 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target1 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1 up 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target1_br 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target1 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target1 netns 
nvmf_ns_spdk 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772163 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:41:34.911 10.0.0.3 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772164 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:41:34.911 10.0.0.4 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # 
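
The addresses themselves come from a single integer pool: 0x0a000001 (10.0.0.1) advances by two per pair, and val_to_ip() unpacks the integer into a dotted quad, matching the expanded printf arguments in the trace. Presumably implemented with shifts and masks, e.g.:

val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}
# val_to_ip 167772163 → 10.0.0.3 (pair 1's initiator address, as set above)
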
set_up initiator1 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.911 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target1_br 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:41:34.912 11:24:59 
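
Finally the pairs are smoke-tested: ping_ips resolves each device's address by reading back the ifalias that set_ip wrote earlier (the cat traced just below) and pings across the bridge. The read-back half of that trick is simply:

get_ip_address() {
    local dev=$1
    cat "/sys/class/net/$dev/ifalias"    # returns e.g. 10.0.0.1 for initiator0
}
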
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 2 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 
10.0.0.1 NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:41:34.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:34.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:41:34.912 00:41:34.912 --- 10.0.0.1 ping statistics --- 00:41:34.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:34.912 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:41:34.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:34.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:41:34.912 00:41:34.912 --- 10.0.0.2 ping statistics --- 00:41:34.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:34.912 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:41:34.912 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:41:34.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:41:34.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:41:34.913 00:41:34.913 --- 10.0.0.3 ping statistics --- 00:41:34.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:34.913 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:41:34.913 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:41:34.913 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:41:34.913 00:41:34.913 --- 10.0.0.4 ping statistics --- 00:41:34.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:34.913 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # return 0 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:41:34.913 11:24:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:41:34.913 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:41:35.178 
11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:41:35.178 ' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp 
-o' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=105646 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 105646 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 105646 ']' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:35.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:35.178 11:24:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.178 [2024-12-05 11:24:59.695138] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:35.178 [2024-12-05 11:24:59.696501] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:41:35.178 [2024-12-05 11:24:59.696573] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:35.437 [2024-12-05 11:24:59.852061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:35.437 [2024-12-05 11:24:59.913465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:35.437 [2024-12-05 11:24:59.913534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:35.437 [2024-12-05 11:24:59.913550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:35.437 [2024-12-05 11:24:59.913563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:35.437 [2024-12-05 11:24:59.913574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
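The startup records above show the nvmfappstart step for this interrupt-mode run: nvmf_tgt is launched inside the nvmf_ns_spdk network namespace with --interrupt-mode and core mask 0x2 (hence the single reactor coming up on core 1 below), and the harness then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern, assuming the spdk_repo layout used in this run; the retry count and sleep interval are illustrative, not the harness's exact values:

# Start the target in the test namespace, pinned by core mask, in interrupt mode.
ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Poll the UNIX-domain RPC socket; rpc.py keeps failing until the target listens.
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || exit 1  # give up if the target died during startup
    sleep 0.1
done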
00:41:35.437 [2024-12-05 11:24:59.913938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:35.437 [2024-12-05 11:24:59.994226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:35.437 [2024-12-05 11:24:59.994514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.437 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.438 [2024-12-05 11:25:00.086773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.697 [2024-12-05 11:25:00.107097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.697 
11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.697 malloc0 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:41:35.697 { 00:41:35.697 "params": { 00:41:35.697 "name": "Nvme$subsystem", 00:41:35.697 "trtype": "$TEST_TRANSPORT", 00:41:35.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:35.697 "adrfam": "ipv4", 00:41:35.697 "trsvcid": "$NVMF_PORT", 00:41:35.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:35.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:35.697 "hdgst": ${hdgst:-false}, 00:41:35.697 "ddgst": ${ddgst:-false} 00:41:35.697 }, 00:41:35.697 "method": "bdev_nvme_attach_controller" 00:41:35.697 } 00:41:35.697 EOF 00:41:35.697 )") 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
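The gen_nvmf_target_json trace above assembles the bdevperf configuration for this run: a heredoc expands one bdev_nvme_attach_controller entry per requested subsystem (filling in $NVMF_FIRST_TARGET_IP, resolved to 10.0.0.2 earlier), and the result is normalized with jq and handed to bdevperf through process substitution, which is where the --json /dev/fd/62 argument comes from. A minimal standalone sketch of the same pattern; the outer "subsystems" wrapper follows SPDK's usual --json config layout and is an assumption here, since the trace only prints the inner entry (shown below):

# Emit a bdev-subsystem config that attaches one NVMe-oF controller over TCP.
gen_config() {
cat << 'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
}
# <(...) expands to a /dev/fd/NN path, matching the --json /dev/fd/62 seen above.
./build/examples/bdevperf --json <(gen_config | jq .) -t 10 -q 128 -w verify -o 8192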
00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=,
00:41:35.697 11:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:41:35.697 "params": {
00:41:35.697 "name": "Nvme1",
00:41:35.697 "trtype": "tcp",
00:41:35.697 "traddr": "10.0.0.2",
00:41:35.697 "adrfam": "ipv4",
00:41:35.697 "trsvcid": "4420",
00:41:35.697 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:41:35.697 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:41:35.697 "hdgst": false,
00:41:35.697 "ddgst": false
00:41:35.697 },
00:41:35.697 "method": "bdev_nvme_attach_controller"
00:41:35.697 }'
00:41:35.697 [2024-12-05 11:25:00.211953] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:41:35.697 [2024-12-05 11:25:00.212073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105678 ]
00:41:35.956 [2024-12-05 11:25:00.371721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:35.956 [2024-12-05 11:25:00.454178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:36.215 Running I/O for 10 seconds...
00:41:38.082 6729.00 IOPS, 52.57 MiB/s
[2024-12-05T11:25:03.667Z] 6497.50 IOPS, 50.76 MiB/s
[2024-12-05T11:25:05.039Z] 6757.33 IOPS, 52.79 MiB/s
[2024-12-05T11:25:05.968Z] 6873.50 IOPS, 53.70 MiB/s
[2024-12-05T11:25:06.902Z] 6844.00 IOPS, 53.47 MiB/s
[2024-12-05T11:25:07.836Z] 6823.50 IOPS, 53.31 MiB/s
[2024-12-05T11:25:08.770Z] 6940.00 IOPS, 54.22 MiB/s
[2024-12-05T11:25:09.703Z] 6972.00 IOPS, 54.47 MiB/s
[2024-12-05T11:25:10.638Z] 7013.33 IOPS, 54.79 MiB/s
[2024-12-05T11:25:10.897Z] 7043.80 IOPS, 55.03 MiB/s
00:41:46.245 Latency(us)
00:41:46.245 [2024-12-05T11:25:10.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:46.245 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:41:46.245 Verification LBA range: start 0x0 length 0x1000
00:41:46.245 Nvme1n1 : 10.01 7046.01 55.05 0.00 0.00 18112.32 1669.61 27462.70
00:41:46.245 [2024-12-05T11:25:10.897Z] ===================================================================================================================
00:41:46.245 [2024-12-05T11:25:10.897Z] Total : 7046.01 55.05 0.00 0.00 18112.32 1669.61 27462.70
00:41:46.505 11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=105795
11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:41:46.505 {
"params": { 00:41:46.505 "name": "Nvme$subsystem", 00:41:46.505 "trtype": "$TEST_TRANSPORT", 00:41:46.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:46.505 "adrfam": "ipv4", 00:41:46.505 "trsvcid": "$NVMF_PORT", 00:41:46.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:46.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:46.505 "hdgst": ${hdgst:-false}, 00:41:46.505 "ddgst": ${ddgst:-false} 00:41:46.505 }, 00:41:46.505 "method": "bdev_nvme_attach_controller" 00:41:46.505 } 00:41:46.505 EOF 00:41:46.505 )") 00:41:46.505 11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:46.505 11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:41:46.505 [2024-12-05 11:25:10.934568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.505 [2024-12-05 11:25:10.934609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.505 11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:41:46.505 2024/12/05 11:25:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.505 11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:41:46.505 11:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:41:46.505 "params": { 00:41:46.505 "name": "Nvme1", 00:41:46.505 "trtype": "tcp", 00:41:46.505 "traddr": "10.0.0.2", 00:41:46.505 "adrfam": "ipv4", 00:41:46.505 "trsvcid": "4420", 00:41:46.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:46.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:46.505 "hdgst": false, 00:41:46.505 "ddgst": false 00:41:46.505 }, 00:41:46.505 "method": "bdev_nvme_attach_controller" 00:41:46.505 }' 00:41:46.505 [2024-12-05 11:25:10.946509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.505 [2024-12-05 11:25:10.946534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.505 2024/12/05 11:25:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.505 [2024-12-05 11:25:10.958497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.505 [2024-12-05 11:25:10.958518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.505 2024/12/05 11:25:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.505 [2024-12-05 11:25:10.970495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.505 [2024-12-05 11:25:10.970516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.505 2024/12/05 11:25:10 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:10.982496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:10.982515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:10.990356] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:41:46.506 [2024-12-05 11:25:10.990445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105795 ] 00:41:46.506 [2024-12-05 11:25:10.994506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:10.994528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.006486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.006504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.018494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.018516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.030486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.030506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:41:46.506 [2024-12-05 11:25:11.042488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.042507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.054486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.054504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.066487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.066505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.078506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.078527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.090521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.090542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.102494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.102515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.114515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.114545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.126509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.126528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.137230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.506 [2024-12-05 11:25:11.138499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.138523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.506 [2024-12-05 11:25:11.150501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.506 [2024-12-05 11:25:11.150522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.506 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.162487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 [2024-12-05 11:25:11.162506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.174487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 [2024-12-05 11:25:11.174506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.186486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:41:46.766 [2024-12-05 11:25:11.186504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.198495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 [2024-12-05 11:25:11.198516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.204929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.766 [2024-12-05 11:25:11.210508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 [2024-12-05 11:25:11.210533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.222499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 [2024-12-05 11:25:11.222521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.234511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 [2024-12-05 11:25:11.234531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.246499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 [2024-12-05 11:25:11.246516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:46.766 [2024-12-05 11:25:11.258486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:46.766 
[2024-12-05 11:25:11.258504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:41:46.766 [2024-12-05 11:25:11.270486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:46.766 [2024-12-05 11:25:11.270503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:46.766 2024/12/05 11:25:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure (subsystem.c:2130 "Requested NSID 1 already in use", nvmf_rpc.c:1520 "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters) repeats at roughly 12-20 ms intervals from 2024-12-05 11:25:11.282 through 11:25:11.426 ...]
00:41:47.026 Running I/O for 5 seconds...
[... the triplet continues repeating from 2024-12-05 11:25:11.446 through 11:25:12.427 ...]
00:41:47.803 14522.00 IOPS, 113.45 MiB/s [2024-12-05T11:25:12.455Z]
[... the triplet continues repeating from 2024-12-05 11:25:12.443 through 11:25:13.076 ...]
00:41:48.578 [2024-12-05 11:25:13.093078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:48.578 [2024-12-05 11:25:13.093112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.109197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.109228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.126899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.126929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.144300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.144333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.161214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.161246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.179035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.179066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.196709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.196744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.212421] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.212457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.578 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.578 [2024-12-05 11:25:13.228794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.578 [2024-12-05 11:25:13.228828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.245603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.245636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.263718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.263749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.281186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.281217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.296418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.296450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.313318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 
11:25:13.313350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.326646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.326678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.343845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.343875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.360615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.360647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.376284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.376314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.394045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.394078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.406007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.406039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.421006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.421037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.836 [2024-12-05 11:25:13.438090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.836 [2024-12-05 11:25:13.438120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.836 14443.50 IOPS, 112.84 MiB/s [2024-12-05T11:25:13.488Z] 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.837 [2024-12-05 11:25:13.454998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.837 [2024-12-05 11:25:13.455030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.837 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.837 [2024-12-05 11:25:13.471127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.837 [2024-12-05 11:25:13.471157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:48.837 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:48.837 [2024-12-05 11:25:13.488339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:48.837 [2024-12-05 11:25:13.488372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.505060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.505091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.519624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.519655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.527677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.527706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.544522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.544553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.561496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.561528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.577301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.577335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.595129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.595160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.612324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.612356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.629894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.629926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.643266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.643296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.660900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.660932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.674342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.674375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.688781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.688814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:41:49.094 [2024-12-05 11:25:13.704549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.704580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.721328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.721361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.094 [2024-12-05 11:25:13.733682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.094 [2024-12-05 11:25:13.733715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.094 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.352 [2024-12-05 11:25:13.749752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.352 [2024-12-05 11:25:13.749784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.352 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.352 [2024-12-05 11:25:13.764608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.352 [2024-12-05 11:25:13.764640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.781277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.781309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.797567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.797608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.815460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.815493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.833204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.833236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.851681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.851714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.870085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.870121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.888655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.888692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.905579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.905621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.919291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.919321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.936503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.936537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.952562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.952606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.969790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.969819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:13.987458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:13.987490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.353 2024/12/05 11:25:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.353 [2024-12-05 11:25:14.004491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.353 [2024-12-05 11:25:14.004523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.610 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.610 [2024-12-05 11:25:14.017823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.610 [2024-12-05 11:25:14.017852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.611 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.611 [2024-12-05 11:25:14.032197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.611 [2024-12-05 11:25:14.032228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.611 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.611 [2024-12-05 11:25:14.048996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.611 [2024-12-05 11:25:14.049026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.611 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.611 [2024-12-05 11:25:14.064709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.611 [2024-12-05 11:25:14.064740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.611 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.611 [2024-12-05 11:25:14.080991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.611 [2024-12-05 11:25:14.081023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.611 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:49.611 [2024-12-05 11:25:14.097669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:49.611 [2024-12-05 11:25:14.097699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:49.611 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
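For readers decoding the failure signature above: the test is deliberately re-issuing nvmf_subsystem_add_ns for namespace ID 1 on nqn.2016-06.io.spdk:cnode1 after that NSID is already attached, and the target rejects every retry with JSON-RPC error -32602, reported on the C side by spdk_nvmf_subsystem_add_ns_ext and nvmf_rpc_ns_paused. Below is a minimal standalone sketch of the same duplicate-NSID exchange; the socket path /var/tmp/spdk.sock and the pre-created malloc0 bdev and subsystem are assumptions for illustration, not details taken from this log:

package main

// Sketch only: dials an assumed default SPDK RPC socket and sends the same
// nvmf_subsystem_add_ns request twice. The second reply should be an error
// object with code -32602, matching the "Requested NSID 1 already in use"
// records captured above.
import (
	"encoding/json"
	"fmt"
	"net"
)

func addNS(id int) (map[string]interface{}, error) {
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock") // assumed socket path
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	// JSON-RPC 2.0 request with the same parameter shape seen in the log.
	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      id,
		"method":  "nvmf_subsystem_add_ns",
		"params": map[string]interface{}{
			"nqn": "nqn.2016-06.io.spdk:cnode1",
			"namespace": map[string]interface{}{
				"bdev_name": "malloc0",
				"nsid":      1,
			},
		},
	}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		return nil, err
	}
	var resp map[string]interface{}
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		return nil, err
	}
	return resp, nil
}

func main() {
	for id := 1; id <= 2; id++ {
		resp, err := addNS(id)
		fmt.Println(resp, err) // second call: "error" member with code -32602
	}
}

Sent twice, the first call attaches the namespace and the second comes back with the error object that the test's client renders as the err: Code=-32602 Msg=Invalid parameters lines above.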
00:41:49.611 [2024-12-05 11:25:14.110230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:49.611 [2024-12-05 11:25:14.110260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:49.611 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-message group repeats from 11:25:14.125 through 11:25:14.407 (elapsed 00:41:49.611 -> 00:41:49.870) ...]
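One cosmetic bug is visible in the dumps themselves: the hide_metadata:%!s(bool=false) and no_auto_visible:%!s(bool=false) fragments are not part of the RPC payload. They are Go's fmt package flagging a mismatched verb, a %s applied to a bool, in the client's parameter logging (the map[...] rendering and the 2024/12/05 11:25:14 prefix mark these lines as coming from a Go JSON-RPC client). A two-line illustration:

package main

import "fmt"

func main() {
	// fmt reports a wrong verb as %!verb(type=value), so formatting a
	// bool with %s reproduces the artifact seen in the log:
	fmt.Printf("hide_metadata:%s\n", false) // prints hide_metadata:%!s(bool=false)
}

Formatting the flags with %t (or %v) would print a plain false instead.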
00:41:49.870 14347.33 IOPS, 112.09 MiB/s [2024-12-05T11:25:14.522Z]
[... the duplicate-NSID group continues from 11:25:14.441 through 11:25:14.814 (elapsed 00:41:49.870 -> 00:41:50.386) ...]
00:41:50.386 [2024-12-05 11:25:14.826702]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.386 [2024-12-05 11:25:14.826733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.386 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.386 [2024-12-05 11:25:14.845026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.845062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.858567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.858614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.871068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.871103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.889908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.889949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.908941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.908980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.927899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 
11:25:14.927943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.945086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.945132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.961548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.961603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.976403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.976451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:14.994054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:14.994102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:15.013034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:15.013079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.387 [2024-12-05 11:25:15.030361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.387 [2024-12-05 11:25:15.030407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.387 2024/12/05 11:25:15 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.049317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.049365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.068982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.069033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.082900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.082948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.102537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.102584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.114402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.114447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.127770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.127807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.145875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.145914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.160284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.160321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.178318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.178362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.190516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.190555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.203205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.203241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.221813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.221849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.240654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.644 [2024-12-05 11:25:15.240691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.644 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.644 [2024-12-05 11:25:15.255191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.645 [2024-12-05 11:25:15.255228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.645 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.645 [2024-12-05 11:25:15.273972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.645 [2024-12-05 11:25:15.274027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.645 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.645 [2024-12-05 11:25:15.293289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.645 [2024-12-05 11:25:15.293343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.645 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.306931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.306968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.325155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.325190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.343507] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.343541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.361757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.361791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.380152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.380185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.397858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.397890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.412249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.412282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 [2024-12-05 11:25:15.429416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:50.903 [2024-12-05 11:25:15.429450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:50.903 2024/12/05 11:25:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:50.903 14108.50 IOPS, 110.22 MiB/s [2024-12-05T11:25:15.555Z] [2024-12-05 11:25:15.446907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
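What the loop above is exercising: each iteration asks the target to attach bdev malloc0 to subsystem nqn.2016-06.io.spdk:cnode1 at an NSID that is already occupied, and the target rejects the call. A minimal sketch of the same failure using SPDK's bundled RPC client follows; the commands are hypothetical (not part of this run), and flag spellings can differ between SPDK releases:

    # assumes a live SPDK target on the default RPC socket
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
        # create the backing bdev: 64 MiB, 512-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
        # first attach succeeds and claims NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0
        # second attach fails: "Requested NSID 1 already in use" -> JSON-RPC -32602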
[... the same failure sequence resumes at 11:25:15.446 and keeps repeating, timestamps only, through 11:25:16.436 ...]
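The map[...] dumps above are the Go JSON-RPC client's rendering of the request parameters; the %!s(bool=false) tokens are Go's fmt placeholder for a bool printed with a %s verb, a cosmetic quirk of the client's logging rather than part of the wire payload. A hand-reconstructed sketch of the underlying exchange, assuming SPDK's default RPC socket at /var/tmp/spdk.sock (socat stands in here for the test's Go client):

    cat <<'EOF' | socat - UNIX-CONNECT:/var/tmp/spdk.sock
    {"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1}}}
    EOF
    # With NSID 1 already claimed, the target replies with the generic
    # invalid-params error that surfaces above as Code=-32602:
    #   {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}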
00:41:51.934 14043.40 IOPS, 109.71 MiB/s [2024-12-05T11:25:16.586Z]
00:41:51.934
00:41:51.934                                                  Latency(us)
00:41:51.934 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:41:51.934 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:51.934 	 Nvme1n1             :       5.01   14040.35     109.69       0.00       0.00    9107.23    2574.63   17850.76
00:41:51.934 ===================================================================================================================
00:41:51.934 	 Total               :              14040.35     109.69       0.00       0.00    9107.23    2574.63   17850.76
11:25:16.518518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:51.934 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:51.934 [2024-12-05 11:25:16.530507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:51.934 [2024-12-05 11:25:16.530536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:51.934 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:51.934 [2024-12-05 11:25:16.542510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:51.934 [2024-12-05 11:25:16.542540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:51.934 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:51.934 [2024-12-05 11:25:16.554501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:51.934 [2024-12-05 11:25:16.554525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:51.934 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:51.934 [2024-12-05 11:25:16.566499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:51.934 [2024-12-05 11:25:16.566521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:51.934 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:51.934 [2024-12-05 11:25:16.578539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:51.934 [2024-12-05 11:25:16.578566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:51.934 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:52.191 [2024-12-05 11:25:16.590528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:52.191 [2024-12-05 11:25:16.590552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:52.191 2024/12/05 11:25:16 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:52.191 [2024-12-05 11:25:16.602494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:52.191 [2024-12-05 11:25:16.602513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:52.191 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:52.191 [2024-12-05 11:25:16.614497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:52.191 [2024-12-05 11:25:16.614518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:52.191 2024/12/05 11:25:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:52.191 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (105795) - No such process 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 105795 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:52.191 delay0 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:52.191 11:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:52.191 [2024-12-05 11:25:16.828672] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:00.305 Initializing NVMe Controllers 00:42:00.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:00.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:00.305 Initialization complete. Launching workers. 00:42:00.305 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 272, failed: 21295 00:42:00.305 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21481, failed to submit 86 00:42:00.305 success 21396, unsuccessful 85, failed 0 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:00.305 rmmod nvme_tcp 00:42:00.305 rmmod nvme_fabrics 00:42:00.305 rmmod nvme_keyring 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 105646 ']' 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 105646 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 105646 ']' 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 105646 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:00.305 11:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105646 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:00.305 killing process with pid 105646 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105646' 00:42:00.305 11:25:24 
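The burst of "Requested NSID 1 already in use" / Code=-32602 errors above is expected behavior rather than a failure: zcopy.sh keeps re-issuing nvmf_subsystem_add_ns for NSID 1 while the subsystem cycles through its pause path (the nvmf_rpc_ns_paused frames), checking that each collision is rejected cleanly instead of wedging the target. The run then swaps in an artificially slow bdev so the abort example has in-flight I/O to cancel. A minimal manual reproduction against an already-running target might look like the sketch below; the listener address and delay latencies are taken from this run, while the malloc geometry (64 MiB, 512-byte blocks) is an assumption borrowed from the nmic defaults later in this log:

    # back the namespace with a delay bdev layered over a malloc bdev
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as namespace 1 of cnode1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue random I/O and fire aborts at it for 5 seconds
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'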
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 105646 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 105646 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:00.305 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:42:00.306 11:25:24 
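Teardown runs in rough reverse order of setup: unload the kernel initiator modules (the rmmod lines above), kill the target process, then dismantle the virtual fabric. Stripped of the xtrace noise, the cleanup reduces to something like this sketch, where $nvmfpid stands in for the target pid (105646 in this run):

    modprobe -v -r nvme-tcp        # pulls nvme_fabrics and nvme_keyring out with it
    kill "$nvmfpid"                # target process; killprocess then waits for exit
    ip link delete nvmf_br         # main bridge
    ip link delete initiator0      # root-namespace veth ends
    ip link delete initiator1
    # target0/target1 live inside nvmf_ns_spdk (hence the "continue" branches
    # above); they disappear when _remove_target_ns deletes the namespace itself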
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:42:00.306 00:42:00.306 real 0m25.507s 00:42:00.306 user 0m37.867s 00:42:00.306 sys 0m10.357s 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:00.306 ************************************ 00:42:00.306 END TEST nvmf_zcopy 00:42:00.306 ************************************ 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:00.306 ************************************ 00:42:00.306 START TEST nvmf_nmic 00:42:00.306 ************************************ 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:00.306 * Looking for test storage... 
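The final iptr step is worth noting: every firewall rule the harness installs is tagged with an "SPDK_NVMF:" comment (visible in the -m comment arguments later in this log), so cleanup can filter the saved ruleset instead of guessing at rule numbers:

    # remove only the harness's tagged rules, preserving everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore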
00:42:00.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:00.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.306 --rc genhtml_branch_coverage=1 00:42:00.306 --rc genhtml_function_coverage=1 00:42:00.306 --rc genhtml_legend=1 00:42:00.306 --rc geninfo_all_blocks=1 00:42:00.306 --rc geninfo_unexecuted_blocks=1 00:42:00.306 00:42:00.306 ' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:00.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.306 --rc genhtml_branch_coverage=1 00:42:00.306 --rc genhtml_function_coverage=1 00:42:00.306 --rc genhtml_legend=1 00:42:00.306 --rc geninfo_all_blocks=1 00:42:00.306 --rc geninfo_unexecuted_blocks=1 00:42:00.306 00:42:00.306 ' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:00.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.306 --rc genhtml_branch_coverage=1 00:42:00.306 --rc genhtml_function_coverage=1 00:42:00.306 --rc genhtml_legend=1 00:42:00.306 --rc geninfo_all_blocks=1 00:42:00.306 --rc geninfo_unexecuted_blocks=1 00:42:00.306 00:42:00.306 ' 00:42:00.306 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:00.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.306 --rc genhtml_branch_coverage=1 00:42:00.306 --rc genhtml_function_coverage=1 00:42:00.306 --rc genhtml_legend=1 00:42:00.306 --rc geninfo_all_blocks=1 00:42:00.306 --rc geninfo_unexecuted_blocks=1 00:42:00.306 00:42:00.306 ' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@280 -- # nvmf_veth_init 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@223 -- # create_target_ns 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:42:00.307 
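With NET_TYPE=virt, nvmftestinit never touches a physical NIC; it fabricates the whole fabric from a network namespace plus veth pairs. The first step, traced above, is the namespace itself and its loopback device:

    ip netns add nvmf_ns_spdk
    # all later target-side commands are run through this prefix array
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up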
11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # create_main_bridge 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@105 -- # delete_main_bridge 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:42:00.307 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 
target=target0 _ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0 up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target0_br 00:42:00.308 11:25:24 
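Each interface pair is two veth links: the plain ends (initiator0, target0) carry traffic, while the _br ends exist only to be enslaved to nvmf_br so the two sides can reach each other. Condensing this and the next few trace lines, the pair-0 wiring amounts to:

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set initiator0 up && ip link set initiator0_br up
    ip link set target0 up && ip link set target0_br up
    ip link set target0 netns nvmf_ns_spdk              # moving it downs the link
    ip netns exec nvmf_ns_spdk ip link set target0 up   # so re-up it inside
    ip link set initiator0_br master nvmf_br            # bridge the _br ends together
    ip link set target0_br master nvmf_br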
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:42:00.308 10.0.0.1 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@197 -- # ip=10.0.0.2 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:42:00.308 10.0.0.2 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth 
]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target0_br 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:00.308 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add 
initiator1 type veth peer name initiator1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator1 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target1 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1 up 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target1_br 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:00.309 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target1 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:42:00.568 11:25:24 
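Addresses come from a computed pool rather than constants: setup_interfaces starts at 0x0a000001 and hands each pair two consecutive values, which is why pair 0 landed on 10.0.0.1/10.0.0.2 and pair 1 is about to get 10.0.0.3/10.0.0.4. The trace only shows val_to_ip's printf output; a plausible reconstruction of that helper (an assumption, not the script's verbatim body) is:

    val_to_ip() {   # e.g. 167772161 -> 10.0.0.1
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) \
            $(( (val >> 16) & 255 )) \
            $(( (val >> 8) & 255 )) \
            $(( val & 255 ))
    }
    val_to_ip 167772164   # prints 10.0.0.4, matching target1 below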
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772163 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:42:00.568 10.0.0.3 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772164 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:42:00.568 11:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:42:00.568 10.0.0.4 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator1 00:42:00.568 11:25:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.568 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target1_br 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # 
ip link set target1_br up 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 2 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:42:00.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:00.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:42:00.569 00:42:00.569 --- 10.0.0.1 ping statistics --- 00:42:00.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.569 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:42:00.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:00.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:42:00.569 00:42:00.569 --- 10.0.0.2 ping statistics --- 00:42:00.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.569 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:42:00.569 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:42:00.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:42:00.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:42:00.570 00:42:00.570 --- 10.0.0.3 ping statistics --- 00:42:00.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.570 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:42:00.570 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:42:00.570 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:42:00.570 00:42:00.570 --- 10.0.0.4 ping statistics --- 00:42:00.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:00.570 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # return 0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 
-- # local dev=initiator1 in_ns= ip 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:00.570 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:42:00.830 ' 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
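At this point nvmf/setup.sh has finished building the test topology: two initiator/target veth pairs joined over the nvmf_br bridge, with each target end moved into the nvmf_ns_spdk namespace and addresses drawn from a 32-bit pool starting at 167772161 (0x0A000001, i.e. 10.0.0.1, advancing by two per pair). The sketch below condenses the commands traced above for pair 1; pair 0 is identical with initiator0/target0 and 10.0.0.1/10.0.0.2. It is a recap of the trace, not the setup.sh source, and the intermediate link-up steps are omitted:

    # val_to_ip above is plain octet extraction: 167772163 = 0x0A000003 -> 10.0.0.3
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk                 # target side lives in the namespace
    ip addr add 10.0.0.3/24 dev initiator1
    echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias  # get_ip_address reads ifalias back later
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    ip link set initiator1_br master nvmf_br               # both *_br peers join the bridge
    ip link set target1_br master nvmf_br
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'               # comment tag lets cleanup strip the rule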
00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=106171 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 106171 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 106171 ']' 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:00.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:00.830 11:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:00.830 [2024-12-05 11:25:25.328162] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:00.830 [2024-12-05 11:25:25.329661] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:42:00.830 [2024-12-05 11:25:25.329746] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:01.090 [2024-12-05 11:25:25.490079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:01.090 [2024-12-05 11:25:25.556737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:01.090 [2024-12-05 11:25:25.556809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:01.090 [2024-12-05 11:25:25.556825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:01.090 [2024-12-05 11:25:25.556838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:01.090 [2024-12-05 11:25:25.556850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
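The target is started inside the nvmf_ns_spdk namespace so that it owns the target-side addresses resolved above. A hedged reading of the nvmf_tgt start line, with each flag matched to a notice in the surrounding output:

    # ip netns exec nvmf_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
    #   -i 0              shared-memory instance id (matches the 'spdk_trace -s nvmf -i 0' hint)
    #   -e 0xFFFF         tracepoint group mask ('Tracepoint Group Mask 0xFFFF specified')
    #   --interrupt-mode  reactors and spdk_threads wait on events instead of busy-polling
    #   -m 0xF            core mask 0b1111, hence 'Total cores available: 4' and four reactors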
00:42:01.090 [2024-12-05 11:25:25.557987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.090 [2024-12-05 11:25:25.558063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:01.090 [2024-12-05 11:25:25.558152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:01.090 [2024-12-05 11:25:25.558153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:01.090 [2024-12-05 11:25:25.642699] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:01.090 [2024-12-05 11:25:25.642941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:01.090 [2024-12-05 11:25:25.643581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:01.090 [2024-12-05 11:25:25.643904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:01.090 [2024-12-05 11:25:25.644559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:01.657 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:01.657 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:42:01.657 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:42:01.657 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:01.657 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.915 [2024-12-05 11:25:26.355129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.915 Malloc0 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.915 11:25:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.915 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.915 [2024-12-05 11:25:26.431282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.916 test case1: single bdev can't be used in multiple subsystems 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.916 [2024-12-05 11:25:26.455013] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:01.916 [2024-12-05 11:25:26.455061] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:01.916 [2024-12-05 11:25:26.455078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:01.916 2024/12/05 11:25:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:42:01.916 request: 00:42:01.916 { 00:42:01.916 "method": "nvmf_subsystem_add_ns", 00:42:01.916 "params": { 00:42:01.916 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:01.916 "namespace": { 00:42:01.916 "bdev_name": "Malloc0", 00:42:01.916 "no_auto_visible": false, 00:42:01.916 "hide_metadata": false 00:42:01.916 } 00:42:01.916 } 00:42:01.916 } 00:42:01.916 Got JSON-RPC error response 00:42:01.916 GoRPCClient: error on JSON-RPC call 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:01.916 Adding namespace failed - expected result. 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:01.916 test case2: host connect to nvmf target in multiple paths 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:01.916 [2024-12-05 11:25:26.467195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:01.916 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:02.174 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:02.174 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:42:02.174 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:02.174 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 
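The JSON-RPC failure above is the point of test case 1: Malloc0 is already claimed exclusive_write by the NVMe-oF target on behalf of cnode1, so adding it as a namespace of cnode2 must fail, and the script records nmic_status=1 as the expected result. A minimal repro sketch against a running target, using SPDK's scripts/rpc.py in place of the test's rpc_cmd wrapper (arguments taken from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Invalid parameters (-32602)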
00:42:02.174 11:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:42:04.074 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:04.074 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:04.074 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:04.074 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:04.074 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:04.074 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:42:04.074 11:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:04.074 [global] 00:42:04.074 thread=1 00:42:04.074 invalidate=1 00:42:04.074 rw=write 00:42:04.074 time_based=1 00:42:04.074 runtime=1 00:42:04.074 ioengine=libaio 00:42:04.074 direct=1 00:42:04.074 bs=4096 00:42:04.074 iodepth=1 00:42:04.074 norandommap=0 00:42:04.074 numjobs=1 00:42:04.074 00:42:04.074 verify_dump=1 00:42:04.074 verify_backlog=512 00:42:04.074 verify_state_save=0 00:42:04.074 do_verify=1 00:42:04.074 verify=crc32c-intel 00:42:04.074 [job0] 00:42:04.074 filename=/dev/nvme0n1 00:42:04.074 Could not set queue depth (nvme0n1) 00:42:04.331 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:04.331 fio-3.35 00:42:04.331 Starting 1 thread 00:42:05.704 00:42:05.704 job0: (groupid=0, jobs=1): err= 0: pid=106275: Thu Dec 5 11:25:29 2024 00:42:05.704 read: IOPS=3517, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1001msec) 00:42:05.704 slat (nsec): min=8866, max=52955, avg=12640.15, stdev=3242.77 00:42:05.704 clat (usec): min=116, max=1008, avg=147.33, stdev=20.48 00:42:05.704 lat (usec): min=126, max=1021, avg=159.97, stdev=21.19 00:42:05.704 clat percentiles (usec): 00:42:05.704 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:42:05.704 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:42:05.704 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:42:05.704 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 265], 99.95th=[ 685], 00:42:05.704 | 99.99th=[ 1012] 00:42:05.704 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:42:05.704 slat (usec): min=14, max=129, avg=17.71, stdev= 4.90 00:42:05.704 clat (usec): min=82, max=291, avg=101.79, stdev= 8.09 00:42:05.704 lat (usec): min=98, max=420, avg=119.50, stdev=10.54 00:42:05.704 clat percentiles (usec): 00:42:05.704 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 97], 00:42:05.704 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 102], 00:42:05.704 | 70.00th=[ 103], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 115], 00:42:05.704 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 178], 99.95th=[ 210], 00:42:05.704 | 99.99th=[ 293] 00:42:05.704 bw ( KiB/s): min=15872, max=15872, per=100.00%, avg=15872.00, stdev= 0.00, samples=1 00:42:05.704 iops : min= 3968, max= 3968, avg=3968.00, stdev= 0.00, samples=1 00:42:05.704 lat (usec) : 100=22.69%, 250=77.24%, 500=0.04%, 750=0.01% 00:42:05.704 lat (msec) : 2=0.01% 
00:42:05.704 cpu : usr=1.30%, sys=8.90%, ctx=7105, majf=0, minf=5 00:42:05.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:05.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:05.704 issued rwts: total=3521,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:05.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:05.704 00:42:05.704 Run status group 0 (all jobs): 00:42:05.704 READ: bw=13.7MiB/s (14.4MB/s), 13.7MiB/s-13.7MiB/s (14.4MB/s-14.4MB/s), io=13.8MiB (14.4MB), run=1001-1001msec 00:42:05.704 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:42:05.704 00:42:05.704 Disk stats (read/write): 00:42:05.704 nvme0n1: ios=3122/3263, merge=0/0, ticks=479/364, in_queue=843, util=91.18% 00:42:05.705 11:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:05.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:05.705 rmmod nvme_tcp 00:42:05.705 rmmod nvme_fabrics 00:42:05.705 rmmod nvme_keyring 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 106171 
']' 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 106171 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 106171 ']' 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 106171 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106171 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:05.705 killing process with pid 106171 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106171' 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 106171 00:42:05.705 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 106171 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:42:05.963 00:42:05.963 real 0m6.135s 00:42:05.963 user 0m14.233s 00:42:05.963 sys 0m3.142s 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:05.963 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:05.964 
************************************ 00:42:05.964 END TEST nvmf_nmic 00:42:05.964 ************************************ 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:06.224 ************************************ 00:42:06.224 START TEST nvmf_fio_target 00:42:06.224 ************************************ 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:06.224 * Looking for test storage... 00:42:06.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.224 --rc genhtml_branch_coverage=1 00:42:06.224 --rc genhtml_function_coverage=1 00:42:06.224 --rc genhtml_legend=1 00:42:06.224 --rc geninfo_all_blocks=1 00:42:06.224 --rc geninfo_unexecuted_blocks=1 00:42:06.224 00:42:06.224 ' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.224 --rc genhtml_branch_coverage=1 00:42:06.224 --rc genhtml_function_coverage=1 00:42:06.224 --rc genhtml_legend=1 00:42:06.224 --rc geninfo_all_blocks=1 00:42:06.224 --rc geninfo_unexecuted_blocks=1 00:42:06.224 00:42:06.224 ' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.224 --rc genhtml_branch_coverage=1 00:42:06.224 --rc genhtml_function_coverage=1 00:42:06.224 --rc genhtml_legend=1 00:42:06.224 --rc geninfo_all_blocks=1 00:42:06.224 --rc geninfo_unexecuted_blocks=1 00:42:06.224 00:42:06.224 ' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:06.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.224 --rc genhtml_branch_coverage=1 00:42:06.224 --rc genhtml_function_coverage=1 00:42:06.224 --rc genhtml_legend=1 00:42:06.224 --rc geninfo_all_blocks=1 00:42:06.224 --rc geninfo_unexecuted_blocks=1 00:42:06.224 
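The cmp_versions 1.15 '<' 2 walk above splits each version string on "." and "-", pads the shorter one with zeros, and compares field by field; an lcov older than 2 selects the legacy --rc option spellings exported just after. A standalone sketch of that comparison (the function body is reconstructed from the traced steps, not the verbatim scripts/common.sh source):

version_lt() {
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields compare as 0, mirroring the zero-padding above.
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use legacy lcov --rc options"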
00:42:06.224 ' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
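common.sh above derives the host identity from nvme gen-hostnqn, which emits the NVMe-spec uuid-form NQN; the host ID is simply that NQN with the fixed prefix stripped. An equivalent sketch without nvme-cli (uuidgen assumed available):

NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # strip the fixed prefix, keep the uuid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")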
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 
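The PATH above carries each toolchain directory several times because paths/export.sh prepends its entries every time it is sourced; harmless, but worth noting when reading the log. A hedged dedup sketch (an editor-side suggestion, not part of export.sh; first occurrence wins, order preserved):

dedupe_path() {
    local entry out=
    while IFS= read -rd: entry; do
        case ":$out:" in
            *":$entry:"*) ;;                 # already present, skip
            *) out=${out:+$out:}$entry ;;
        esac
    done <<< "$PATH:"
    printf '%s\n' "$out"
}
PATH=$(dedupe_path)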
-- # export NVMF_APP_SHM_ID 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:42:06.224 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:42:06.225 11:25:30 
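build_nvmf_app_args above accumulates the target's argv in an array: the shared-memory id and full tracepoint mask are always appended, and --interrupt-mode is added because this job's '[' 1 -eq 1 ']' branch is taken. A sketch of the accumulation (the base binary name and the gating variable are assumptions; only NVMF_APP and NVMF_APP_SHM_ID appear in the trace):

NVMF_APP=(nvmf_tgt)                               # hypothetical base argv
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)  # always: shm id + trace mask
interrupt_mode=1                                  # this job runs --interrupt-mode
(( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)
printf '%s ' "${NVMF_APP[@]}"; echo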
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@223 -- # create_target_ns 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # 
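create_target_ns and create_main_bridge above are the skeleton of the whole fixture: a fresh network namespace with loopback up, plus a root-namespace bridge that will carry the *_br ends of every veth pair. Condensed and runnable as root; names and commands are taken from the trace:

ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
# The comment tag is what the teardown's grep -v SPDK_NVMF later matches.
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'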
local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:06.225 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:06.544 11:25:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target0 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:42:06.544 11:25:30 
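The addresses come from an integer pool: ip_pool starts at 0x0a000001 (167772161, i.e. 10.0.0.1) and each initiator/target pair consumes two consecutive values. val_to_ip's printf was traced as printf '%u.%u.%u.%u\n' 10 0 0 1; a shift-and-mask body that produces exactly that call (the body itself is a reconstruction, not the setup.sh source):

val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 -> initiator0, root namespace
val_to_ip 167772162   # 10.0.0.2 -> target0, inside nvmf_ns_spdk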
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:42:06.544 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:42:06.545 10.0.0.1 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:42:06.545 10.0.0.2 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:42:06.545 11:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:42:06.545 11:25:31 
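Each pair is stitched into the fabric the same way above: the _br peer of every veth is enslaved to nvmf_br, and the initiator interface gets a tagged INPUT accept for the NVMe/TCP listen port. Sketch for pair 0, with the commands as traced:

ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br    master nvmf_br && ip link set target0_br    up

iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'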
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 
00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target1 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772163 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:42:06.545 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 
-- # echo 10.0.0.3 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:42:06.546 10.0.0.3 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772164 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:42:06.546 10.0.0.4 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:06.546 11:25:31 
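With both pairs wired up, ping_ips below verifies every leg once in each direction: the initiator address is pinged from inside the namespace and the target address from the root namespace, each address read back from the ifalias file it was recorded in earlier. Equivalent one-pair sketch:

initiator_ip=$(cat /sys/class/net/initiator0/ifalias)                        # 10.0.0.1
target_ip=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)   # 10.0.0.2

ip netns exec nvmf_ns_spdk ping -c 1 "$initiator_ip"   # target side -> initiator
ping -c 1 "$target_ip"                                 # initiator side -> target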
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:42:06.546 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:42:06.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:06.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:42:06.826 00:42:06.826 --- 10.0.0.1 ping statistics --- 00:42:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.826 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:42:06.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:06.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:42:06.826 00:42:06.826 --- 10.0.0.2 ping statistics --- 00:42:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.826 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:42:06.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:42:06.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:42:06.826 00:42:06.826 --- 10.0.0.3 ping statistics --- 00:42:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.826 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:42:06.826 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
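The exchanges above and below all follow one pattern from nvmf/setup.sh: each logical device's address is stored in its interface alias, target-side devices live inside the nvmf_ns_spdk network namespace, and every initiator/target pair is pinged once in each direction. A minimal sketch of the two primitives, distilled from the xtrace; the device and namespace names are copied from this log, and the standalone commands are illustrative rather than the exact setup.sh functions:

    # Address lookup (get_ip_address in the trace): the IP is kept in the
    # interface alias; target-side reads run inside the namespace.
    cat /sys/class/net/initiator0/ifalias                          # -> 10.0.0.1
    ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias  # -> 10.0.0.2

    # Reachability check (ping_ip in the trace): host to target address,
    # namespace to initiator address, one ICMP echo request each.
    ping -c 1 10.0.0.2
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
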
00:42:06.826 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:42:06.826 00:42:06.826 --- 10.0.0.4 ping statistics --- 00:42:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.826 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # return 0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:06.826 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:42:06.827 11:25:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:42:06.827 ' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=106511 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 106511 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 106511 ']' 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:06.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:06.827 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:06.827 [2024-12-05 11:25:31.447854] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:06.827 [2024-12-05 11:25:31.449428] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:42:06.827 [2024-12-05 11:25:31.449512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:07.086 [2024-12-05 11:25:31.609943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:07.086 [2024-12-05 11:25:31.670159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:42:07.086 [2024-12-05 11:25:31.670227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:07.086 [2024-12-05 11:25:31.670243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:07.086 [2024-12-05 11:25:31.670256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:07.086 [2024-12-05 11:25:31.670268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:07.086 [2024-12-05 11:25:31.671387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:07.086 [2024-12-05 11:25:31.671565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:07.086 [2024-12-05 11:25:31.671620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:07.086 [2024-12-05 11:25:31.671631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:07.344 [2024-12-05 11:25:31.754417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:07.344 [2024-12-05 11:25:31.755081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:07.344 [2024-12-05 11:25:31.755312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:07.344 [2024-12-05 11:25:31.755731] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:07.344 [2024-12-05 11:25:31.755751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
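The notices above are the startup half of nvmfappstart -m 0xF: the target binary is launched inside the namespace with interrupt mode enabled, its PID is recorded as nvmfpid, and waitforlisten blocks until the RPC socket answers. A minimal sketch; the launch command is copied verbatim from the trace, while the wait loop is a simplification of what waitforlisten in autotest_common.sh actually does:

    # Launch as traced above; the polling loop below is a simplified stand-in
    # for waitforlisten, not the exact autotest_common.sh implementation.
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          spdk_get_version &> /dev/null; do
        sleep 0.5
    done

The -m 0xF mask accounts for the four "Reactor started" notices on cores 0 through 3, and --interrupt-mode is what produces the "Set spdk_thread (...) to intr mode" messages for app_thread and the four poll groups.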
00:42:07.344 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:07.344 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:42:07.344 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:42:07.344 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:07.344 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:07.344 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:07.344 11:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:07.601 [2024-12-05 11:25:32.121315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:07.601 11:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:07.860 11:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:42:07.860 11:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:08.119 11:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:42:08.119 11:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:08.377 11:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:42:08.377 11:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:08.636 11:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:42:08.636 11:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:42:08.893 11:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:09.150 11:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:42:09.409 11:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:09.409 11:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:42:09.409 11:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:09.975 11:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:42:09.975 11:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:42:10.233 11:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:10.491 11:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:10.491 11:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:10.749 11:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:10.749 11:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:42:11.007 11:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:11.007 [2024-12-05 11:25:35.621367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:11.007 11:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:42:11.265 11:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:42:11.523 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:11.523 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:42:11.523 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:42:11.523 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:11.523 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:42:11.523 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:42:11.523 11:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:42:14.052 11:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:14.052 11:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:14.052 11:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:14.052 11:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:42:14.052 11:25:38 
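At this point target/fio.sh has provisioned the entire target: a TCP transport, seven 64 MiB malloc bdevs with 512-byte blocks, a raid0 over Malloc2/Malloc3, a concat over Malloc4-Malloc6, and one subsystem exposing four namespaces on 10.0.0.2 port 4420; the initiator side then connects and waitforserial polls lsblk until all four namespaces appear. Condensed into one runnable sequence; every command is taken from the rpc.py and nvme calls traced above, and only the malloc loop plus the simplified wait loop are our shorthand:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 0 1 2 3 4 5 6; do
        $rpc bdev_malloc_create 64 512    # Malloc0..Malloc6: 64 MiB, 512 B blocks
    done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" raid0
    $rpc nvmf_subsystem_add_ns "$nqn" concat0
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 \
        --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 \
        -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    # waitforserial, simplified: block until all four namespaces are visible
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do
        sleep 2
    done

The fio-wrapper passes that follow drive 4 KiB libaio verify jobs (verify=crc32c-intel) against /dev/nvme0n1 through /dev/nvme0n4, first sequential write and then randwrite at queue depth 1, with queue-depth-128 passes later in the run.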
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:14.052 11:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:42:14.052 11:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:14.052 [global] 00:42:14.052 thread=1 00:42:14.052 invalidate=1 00:42:14.052 rw=write 00:42:14.052 time_based=1 00:42:14.052 runtime=1 00:42:14.052 ioengine=libaio 00:42:14.052 direct=1 00:42:14.052 bs=4096 00:42:14.052 iodepth=1 00:42:14.052 norandommap=0 00:42:14.052 numjobs=1 00:42:14.052 00:42:14.052 verify_dump=1 00:42:14.052 verify_backlog=512 00:42:14.052 verify_state_save=0 00:42:14.052 do_verify=1 00:42:14.052 verify=crc32c-intel 00:42:14.052 [job0] 00:42:14.052 filename=/dev/nvme0n1 00:42:14.052 [job1] 00:42:14.052 filename=/dev/nvme0n2 00:42:14.052 [job2] 00:42:14.052 filename=/dev/nvme0n3 00:42:14.052 [job3] 00:42:14.052 filename=/dev/nvme0n4 00:42:14.052 Could not set queue depth (nvme0n1) 00:42:14.052 Could not set queue depth (nvme0n2) 00:42:14.052 Could not set queue depth (nvme0n3) 00:42:14.052 Could not set queue depth (nvme0n4) 00:42:14.052 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:14.052 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:14.052 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:14.052 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:14.052 fio-3.35 00:42:14.052 Starting 4 threads 00:42:14.992 00:42:14.992 job0: (groupid=0, jobs=1): err= 0: pid=106791: Thu Dec 5 11:25:39 2024 00:42:14.992 read: IOPS=2793, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:42:14.992 slat (nsec): min=8495, max=42198, avg=9712.50, stdev=1653.55 00:42:14.992 clat (usec): min=153, max=298, avg=186.44, stdev=13.59 00:42:14.992 lat (usec): min=162, max=308, avg=196.16, stdev=13.96 00:42:14.992 clat percentiles (usec): 00:42:14.992 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:42:14.992 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:42:14.992 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 212], 00:42:14.992 | 99.00th=[ 225], 99.50th=[ 229], 99.90th=[ 239], 99.95th=[ 247], 00:42:14.992 | 99.99th=[ 297] 00:42:14.992 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:42:14.992 slat (usec): min=9, max=101, avg=14.44, stdev= 5.52 00:42:14.992 clat (usec): min=99, max=574, avg=130.70, stdev=17.05 00:42:14.992 lat (usec): min=112, max=610, avg=145.13, stdev=19.13 00:42:14.992 clat percentiles (usec): 00:42:14.992 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 121], 00:42:14.992 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:42:14.992 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:42:14.992 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 343], 99.95th=[ 441], 00:42:14.992 | 99.99th=[ 578] 00:42:14.992 bw ( KiB/s): min=12288, max=12288, per=31.64%, avg=12288.00, stdev= 0.00, samples=1 00:42:14.992 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:42:14.992 lat (usec) : 100=0.02%, 250=99.86%, 500=0.10%, 750=0.02% 00:42:14.992 cpu : usr=1.30%, 
sys=5.60%, ctx=5868, majf=0, minf=15 00:42:14.992 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:14.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.992 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.992 issued rwts: total=2796,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.992 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:14.992 job1: (groupid=0, jobs=1): err= 0: pid=106792: Thu Dec 5 11:25:39 2024 00:42:14.992 read: IOPS=1584, BW=6339KiB/s (6491kB/s)(6352KiB/1002msec) 00:42:14.992 slat (nsec): min=7110, max=58084, avg=9914.26, stdev=2851.69 00:42:14.992 clat (usec): min=179, max=2224, avg=313.03, stdev=63.65 00:42:14.992 lat (usec): min=192, max=2232, avg=322.94, stdev=63.82 00:42:14.992 clat percentiles (usec): 00:42:14.992 | 1.00th=[ 202], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:42:14.992 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:42:14.992 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 388], 00:42:14.992 | 99.00th=[ 453], 99.50th=[ 502], 99.90th=[ 734], 99.95th=[ 2212], 00:42:14.992 | 99.99th=[ 2212] 00:42:14.992 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:42:14.992 slat (nsec): min=8865, max=94934, avg=15196.88, stdev=6623.37 00:42:14.992 clat (usec): min=138, max=323, avg=220.75, stdev=21.73 00:42:14.992 lat (usec): min=156, max=352, avg=235.95, stdev=21.57 00:42:14.992 clat percentiles (usec): 00:42:14.992 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:42:14.992 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:42:14.992 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 260], 00:42:14.992 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 322], 00:42:14.993 | 99.99th=[ 322] 00:42:14.993 bw ( KiB/s): min= 8192, max= 8192, per=21.09%, avg=8192.00, stdev= 0.00, samples=1 00:42:14.993 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:14.993 lat (usec) : 250=52.75%, 500=47.06%, 750=0.17% 00:42:14.993 lat (msec) : 4=0.03% 00:42:14.993 cpu : usr=1.50%, sys=3.40%, ctx=3637, majf=0, minf=17 00:42:14.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:14.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.993 issued rwts: total=1588,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:14.993 job2: (groupid=0, jobs=1): err= 0: pid=106793: Thu Dec 5 11:25:39 2024 00:42:14.993 read: IOPS=1587, BW=6350KiB/s (6502kB/s)(6356KiB/1001msec) 00:42:14.993 slat (nsec): min=7179, max=36261, avg=10014.99, stdev=2787.20 00:42:14.993 clat (usec): min=172, max=2148, avg=312.79, stdev=61.82 00:42:14.993 lat (usec): min=194, max=2156, avg=322.80, stdev=62.13 00:42:14.993 clat percentiles (usec): 00:42:14.993 | 1.00th=[ 208], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:42:14.993 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:42:14.993 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 383], 00:42:14.993 | 99.00th=[ 433], 99.50th=[ 506], 99.90th=[ 832], 99.95th=[ 2147], 00:42:14.993 | 99.99th=[ 2147] 00:42:14.993 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:42:14.993 slat (nsec): min=12549, max=52586, avg=17613.32, stdev=5451.87 00:42:14.993 clat 
(usec): min=134, max=316, avg=218.16, stdev=21.28 00:42:14.993 lat (usec): min=158, max=336, avg=235.78, stdev=21.46 00:42:14.993 clat percentiles (usec): 00:42:14.993 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:42:14.993 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:42:14.993 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 255], 00:42:14.993 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 306], 00:42:14.993 | 99.99th=[ 318] 00:42:14.993 bw ( KiB/s): min= 8192, max= 8192, per=21.09%, avg=8192.00, stdev= 0.00, samples=1 00:42:14.993 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:14.993 lat (usec) : 250=53.29%, 500=46.47%, 750=0.19%, 1000=0.03% 00:42:14.993 lat (msec) : 4=0.03% 00:42:14.993 cpu : usr=1.20%, sys=4.30%, ctx=3638, majf=0, minf=7 00:42:14.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:14.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.993 issued rwts: total=1589,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:14.993 job3: (groupid=0, jobs=1): err= 0: pid=106794: Thu Dec 5 11:25:39 2024 00:42:14.993 read: IOPS=2463, BW=9854KiB/s (10.1MB/s)(9864KiB/1001msec) 00:42:14.993 slat (nsec): min=8677, max=51468, avg=10849.06, stdev=2720.51 00:42:14.993 clat (usec): min=153, max=489, avg=215.06, stdev=19.66 00:42:14.993 lat (usec): min=162, max=510, avg=225.91, stdev=20.27 00:42:14.993 clat percentiles (usec): 00:42:14.993 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:42:14.993 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:42:14.993 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 245], 00:42:14.993 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 445], 99.95th=[ 469], 00:42:14.993 | 99.99th=[ 490] 00:42:14.993 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:42:14.993 slat (usec): min=12, max=100, avg=16.07, stdev= 5.98 00:42:14.993 clat (usec): min=124, max=452, avg=154.80, stdev=15.73 00:42:14.993 lat (usec): min=138, max=553, avg=170.87, stdev=18.61 00:42:14.993 clat percentiles (usec): 00:42:14.993 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:42:14.993 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:42:14.993 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:42:14.993 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 293], 99.95th=[ 338], 00:42:14.993 | 99.99th=[ 453] 00:42:14.993 bw ( KiB/s): min=12272, max=12272, per=31.60%, avg=12272.00, stdev= 0.00, samples=1 00:42:14.993 iops : min= 3068, max= 3068, avg=3068.00, stdev= 0.00, samples=1 00:42:14.993 lat (usec) : 250=98.49%, 500=1.51% 00:42:14.993 cpu : usr=1.00%, sys=5.50%, ctx=5026, majf=0, minf=7 00:42:14.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:14.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.993 issued rwts: total=2466,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:14.993 00:42:14.993 Run status group 0 (all jobs): 00:42:14.993 READ: bw=32.9MiB/s (34.5MB/s), 6339KiB/s-10.9MiB/s (6491kB/s-11.4MB/s), io=33.0MiB (34.6MB), run=1001-1002msec 
00:42:14.993 WRITE: bw=37.9MiB/s (39.8MB/s), 8176KiB/s-12.0MiB/s (8372kB/s-12.6MB/s), io=38.0MiB (39.8MB), run=1001-1002msec 00:42:14.993 00:42:14.993 Disk stats (read/write): 00:42:14.993 nvme0n1: ios=2457/2560, merge=0/0, ticks=482/342, in_queue=824, util=86.87% 00:42:14.993 nvme0n2: ios=1537/1536, merge=0/0, ticks=489/327, in_queue=816, util=87.92% 00:42:14.993 nvme0n3: ios=1488/1536, merge=0/0, ticks=472/347, in_queue=819, util=88.92% 00:42:14.993 nvme0n4: ios=2048/2216, merge=0/0, ticks=454/353, in_queue=807, util=89.58% 00:42:14.993 11:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:42:14.993 [global] 00:42:14.993 thread=1 00:42:14.993 invalidate=1 00:42:14.993 rw=randwrite 00:42:14.993 time_based=1 00:42:14.993 runtime=1 00:42:14.993 ioengine=libaio 00:42:14.993 direct=1 00:42:14.993 bs=4096 00:42:14.993 iodepth=1 00:42:14.993 norandommap=0 00:42:14.993 numjobs=1 00:42:14.993 00:42:14.993 verify_dump=1 00:42:14.993 verify_backlog=512 00:42:14.993 verify_state_save=0 00:42:14.993 do_verify=1 00:42:14.993 verify=crc32c-intel 00:42:14.993 [job0] 00:42:14.993 filename=/dev/nvme0n1 00:42:14.993 [job1] 00:42:14.993 filename=/dev/nvme0n2 00:42:14.993 [job2] 00:42:14.993 filename=/dev/nvme0n3 00:42:14.993 [job3] 00:42:14.993 filename=/dev/nvme0n4 00:42:15.253 Could not set queue depth (nvme0n1) 00:42:15.253 Could not set queue depth (nvme0n2) 00:42:15.253 Could not set queue depth (nvme0n3) 00:42:15.253 Could not set queue depth (nvme0n4) 00:42:15.253 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.253 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.253 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.253 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:15.253 fio-3.35 00:42:15.253 Starting 4 threads 00:42:16.632 00:42:16.632 job0: (groupid=0, jobs=1): err= 0: pid=106847: Thu Dec 5 11:25:40 2024 00:42:16.632 read: IOPS=1615, BW=6462KiB/s (6617kB/s)(6468KiB/1001msec) 00:42:16.632 slat (nsec): min=6142, max=28002, avg=10174.44, stdev=2647.89 00:42:16.632 clat (usec): min=214, max=424, avg=294.51, stdev=38.90 00:42:16.632 lat (usec): min=222, max=435, avg=304.69, stdev=39.25 00:42:16.632 clat percentiles (usec): 00:42:16.632 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 260], 00:42:16.632 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:42:16.632 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 367], 00:42:16.632 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 412], 99.95th=[ 424], 00:42:16.632 | 99.99th=[ 424] 00:42:16.632 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:42:16.632 slat (nsec): min=8338, max=55478, avg=15246.10, stdev=4405.64 00:42:16.632 clat (usec): min=108, max=785, avg=230.65, stdev=40.67 00:42:16.632 lat (usec): min=126, max=800, avg=245.90, stdev=40.91 00:42:16.632 clat percentiles (usec): 00:42:16.632 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 198], 00:42:16.632 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 237], 00:42:16.632 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 297], 00:42:16.632 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 396], 99.95th=[ 734], 
00:42:16.632 | 99.99th=[ 783] 00:42:16.632 bw ( KiB/s): min= 8208, max= 8208, per=25.07%, avg=8208.00, stdev= 0.00, samples=1 00:42:16.632 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:42:16.632 lat (usec) : 250=44.88%, 500=55.06%, 750=0.03%, 1000=0.03% 00:42:16.632 cpu : usr=1.10%, sys=3.90%, ctx=3666, majf=0, minf=5 00:42:16.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.632 issued rwts: total=1617,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:16.632 job1: (groupid=0, jobs=1): err= 0: pid=106848: Thu Dec 5 11:25:40 2024 00:42:16.632 read: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec) 00:42:16.632 slat (nsec): min=8602, max=34498, avg=10376.42, stdev=2949.94 00:42:16.632 clat (usec): min=149, max=327, avg=214.08, stdev=26.34 00:42:16.632 lat (usec): min=158, max=336, avg=224.45, stdev=26.85 00:42:16.632 clat percentiles (usec): 00:42:16.632 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 190], 00:42:16.632 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 221], 00:42:16.632 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 260], 00:42:16.632 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 322], 00:42:16.632 | 99.99th=[ 326] 00:42:16.632 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:42:16.632 slat (nsec): min=12222, max=88219, avg=14522.15, stdev=5093.36 00:42:16.632 clat (usec): min=96, max=524, avg=153.24, stdev=25.33 00:42:16.632 lat (usec): min=108, max=537, avg=167.76, stdev=26.77 00:42:16.632 clat percentiles (usec): 00:42:16.632 | 1.00th=[ 112], 5.00th=[ 121], 10.00th=[ 126], 20.00th=[ 133], 00:42:16.632 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 157], 00:42:16.632 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 198], 00:42:16.632 | 99.00th=[ 223], 99.50th=[ 239], 99.90th=[ 277], 99.95th=[ 302], 00:42:16.632 | 99.99th=[ 529] 00:42:16.632 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:42:16.632 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:42:16.632 lat (usec) : 100=0.06%, 250=95.39%, 500=4.53%, 750=0.02% 00:42:16.632 cpu : usr=1.80%, sys=4.30%, ctx=5079, majf=0, minf=15 00:42:16.632 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.632 issued rwts: total=2519,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:16.632 job2: (groupid=0, jobs=1): err= 0: pid=106849: Thu Dec 5 11:25:40 2024 00:42:16.632 read: IOPS=1388, BW=5554KiB/s (5688kB/s)(5560KiB/1001msec) 00:42:16.632 slat (nsec): min=18625, max=58234, avg=22257.71, stdev=3264.33 00:42:16.632 clat (usec): min=288, max=994, avg=360.42, stdev=29.21 00:42:16.632 lat (usec): min=309, max=1016, avg=382.68, stdev=29.34 00:42:16.632 clat percentiles (usec): 00:42:16.632 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:42:16.632 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:42:16.632 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 392], 95.00th=[ 400], 00:42:16.632 | 
99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 449], 99.95th=[ 996], 00:42:16.632 | 99.99th=[ 996] 00:42:16.632 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:42:16.632 slat (usec): min=21, max=106, avg=32.35, stdev= 5.60 00:42:16.632 clat (usec): min=196, max=4399, avg=268.58, stdev=112.07 00:42:16.632 lat (usec): min=224, max=4432, avg=300.93, stdev=112.39 00:42:16.632 clat percentiles (usec): 00:42:16.632 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 243], 00:42:16.632 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:42:16.632 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:42:16.632 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 1319], 99.95th=[ 4424], 00:42:16.632 | 99.99th=[ 4424] 00:42:16.632 bw ( KiB/s): min= 4560, max= 7728, per=18.77%, avg=6144.00, stdev=2240.11, samples=2 00:42:16.632 iops : min= 1140, max= 1932, avg=1536.00, stdev=560.03, samples=2 00:42:16.633 lat (usec) : 250=16.68%, 500=83.22%, 1000=0.03% 00:42:16.633 lat (msec) : 2=0.03%, 10=0.03% 00:42:16.633 cpu : usr=1.00%, sys=6.60%, ctx=2926, majf=0, minf=15 00:42:16.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.633 issued rwts: total=1390,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:16.633 job3: (groupid=0, jobs=1): err= 0: pid=106850: Thu Dec 5 11:25:40 2024 00:42:16.633 read: IOPS=1615, BW=6462KiB/s (6617kB/s)(6468KiB/1001msec) 00:42:16.633 slat (nsec): min=6209, max=34803, avg=9100.02, stdev=2937.52 00:42:16.633 clat (usec): min=211, max=427, avg=295.61, stdev=38.77 00:42:16.633 lat (usec): min=222, max=436, avg=304.71, stdev=39.46 00:42:16.633 clat percentiles (usec): 00:42:16.633 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 262], 00:42:16.633 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 302], 00:42:16.633 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 367], 00:42:16.633 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 429], 00:42:16.633 | 99.99th=[ 429] 00:42:16.633 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:42:16.633 slat (nsec): min=9640, max=94485, avg=16564.57, stdev=4590.04 00:42:16.633 clat (usec): min=118, max=864, avg=229.37, stdev=42.73 00:42:16.633 lat (usec): min=134, max=883, avg=245.94, stdev=43.37 00:42:16.633 clat percentiles (usec): 00:42:16.633 | 1.00th=[ 149], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 196], 00:42:16.633 | 30.00th=[ 206], 40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 235], 00:42:16.633 | 70.00th=[ 247], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 297], 00:42:16.633 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 437], 99.95th=[ 717], 00:42:16.633 | 99.99th=[ 865] 00:42:16.633 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:42:16.633 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:42:16.633 lat (usec) : 250=45.48%, 500=54.46%, 750=0.03%, 1000=0.03% 00:42:16.633 cpu : usr=0.80%, sys=4.30%, ctx=3665, majf=0, minf=13 00:42:16.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.633 issued rwts: 
total=1617,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:16.633 00:42:16.633 Run status group 0 (all jobs): 00:42:16.633 READ: bw=27.9MiB/s (29.2MB/s), 5554KiB/s-9.83MiB/s (5688kB/s-10.3MB/s), io=27.9MiB (29.3MB), run=1001-1001msec 00:42:16.633 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:42:16.633 00:42:16.633 Disk stats (read/write): 00:42:16.633 nvme0n1: ios=1586/1578, merge=0/0, ticks=458/353, in_queue=811, util=88.18% 00:42:16.633 nvme0n2: ios=2090/2358, merge=0/0, ticks=479/376, in_queue=855, util=88.77% 00:42:16.633 nvme0n3: ios=1042/1536, merge=0/0, ticks=622/439, in_queue=1061, util=92.88% 00:42:16.633 nvme0n4: ios=1536/1578, merge=0/0, ticks=425/381, in_queue=806, util=89.83% 00:42:16.633 11:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:42:16.633 [global] 00:42:16.633 thread=1 00:42:16.633 invalidate=1 00:42:16.633 rw=write 00:42:16.633 time_based=1 00:42:16.633 runtime=1 00:42:16.633 ioengine=libaio 00:42:16.633 direct=1 00:42:16.633 bs=4096 00:42:16.633 iodepth=128 00:42:16.633 norandommap=0 00:42:16.633 numjobs=1 00:42:16.633 00:42:16.633 verify_dump=1 00:42:16.633 verify_backlog=512 00:42:16.633 verify_state_save=0 00:42:16.633 do_verify=1 00:42:16.633 verify=crc32c-intel 00:42:16.633 [job0] 00:42:16.633 filename=/dev/nvme0n1 00:42:16.633 [job1] 00:42:16.633 filename=/dev/nvme0n2 00:42:16.633 [job2] 00:42:16.633 filename=/dev/nvme0n3 00:42:16.633 [job3] 00:42:16.633 filename=/dev/nvme0n4 00:42:16.633 Could not set queue depth (nvme0n1) 00:42:16.633 Could not set queue depth (nvme0n2) 00:42:16.633 Could not set queue depth (nvme0n3) 00:42:16.633 Could not set queue depth (nvme0n4) 00:42:16.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:16.633 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:16.633 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:16.633 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:16.633 fio-3.35 00:42:16.633 Starting 4 threads 00:42:18.008 00:42:18.008 job0: (groupid=0, jobs=1): err= 0: pid=106906: Thu Dec 5 11:25:42 2024 00:42:18.008 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:42:18.008 slat (usec): min=7, max=7704, avg=170.60, stdev=821.27 00:42:18.008 clat (usec): min=13674, max=31830, avg=21434.36, stdev=3466.25 00:42:18.008 lat (usec): min=13696, max=35398, avg=21604.97, stdev=3447.68 00:42:18.008 clat percentiles (usec): 00:42:18.008 | 1.00th=[14615], 5.00th=[15664], 10.00th=[16450], 20.00th=[17957], 00:42:18.008 | 30.00th=[19006], 40.00th=[20579], 50.00th=[21627], 60.00th=[23200], 00:42:18.008 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25297], 95.00th=[26608], 00:42:18.008 | 99.00th=[28705], 99.50th=[30016], 99.90th=[31589], 99.95th=[31851], 00:42:18.008 | 99.99th=[31851] 00:42:18.008 write: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1003msec); 0 zone resets 00:42:18.008 slat (usec): min=11, max=7891, avg=177.05, stdev=600.07 00:42:18.008 clat (usec): min=2089, max=37037, avg=23575.45, stdev=5700.11 00:42:18.008 lat (usec): min=2740, max=37069, avg=23752.50, stdev=5708.62 00:42:18.008 clat percentiles 
(usec): 00:42:18.008 | 1.00th=[ 7963], 5.00th=[17171], 10.00th=[17433], 20.00th=[19268], 00:42:18.008 | 30.00th=[20055], 40.00th=[21890], 50.00th=[22676], 60.00th=[24249], 00:42:18.008 | 70.00th=[25297], 80.00th=[29230], 90.00th=[33162], 95.00th=[33817], 00:42:18.008 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:42:18.008 | 99.99th=[36963] 00:42:18.008 bw ( KiB/s): min=10888, max=12312, per=18.22%, avg=11600.00, stdev=1006.92, samples=2 00:42:18.008 iops : min= 2722, max= 3078, avg=2900.00, stdev=251.73, samples=2 00:42:18.008 lat (msec) : 4=0.16%, 10=0.57%, 20=30.25%, 50=69.02% 00:42:18.008 cpu : usr=2.40%, sys=9.78%, ctx=382, majf=0, minf=9 00:42:18.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:42:18.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:18.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:18.008 issued rwts: total=2560,3024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:18.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:18.008 job1: (groupid=0, jobs=1): err= 0: pid=106907: Thu Dec 5 11:25:42 2024 00:42:18.008 read: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.7MiB/1004msec) 00:42:18.008 slat (usec): min=8, max=5875, avg=99.58, stdev=419.52 00:42:18.008 clat (usec): min=3829, max=25075, avg=12995.99, stdev=3468.71 00:42:18.008 lat (usec): min=3841, max=25089, avg=13095.57, stdev=3480.12 00:42:18.008 clat percentiles (usec): 00:42:18.008 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:42:18.008 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11731], 60.00th=[12125], 00:42:18.008 | 70.00th=[12649], 80.00th=[13304], 90.00th=[20055], 95.00th=[21365], 00:42:18.008 | 99.00th=[23200], 99.50th=[23987], 99.90th=[24249], 99.95th=[25035], 00:42:18.008 | 99.99th=[25035] 00:42:18.008 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:42:18.008 slat (usec): min=11, max=3927, avg=89.37, stdev=327.41 00:42:18.008 clat (usec): min=8230, max=22692, avg=12007.94, stdev=2824.94 00:42:18.008 lat (usec): min=8273, max=23050, avg=12097.31, stdev=2837.28 00:42:18.008 clat percentiles (usec): 00:42:18.008 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10290], 00:42:18.008 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11469], 60.00th=[11731], 00:42:18.008 | 70.00th=[11994], 80.00th=[12387], 90.00th=[13960], 95.00th=[20055], 00:42:18.008 | 99.00th=[21890], 99.50th=[21890], 99.90th=[22676], 99.95th=[22676], 00:42:18.008 | 99.99th=[22676] 00:42:18.008 bw ( KiB/s): min=16632, max=24328, per=32.18%, avg=20480.00, stdev=5441.89, samples=2 00:42:18.008 iops : min= 4158, max= 6082, avg=5120.00, stdev=1360.47, samples=2 00:42:18.008 lat (msec) : 4=0.04%, 10=9.03%, 20=83.26%, 50=7.67% 00:42:18.008 cpu : usr=4.99%, sys=12.66%, ctx=781, majf=0, minf=6 00:42:18.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:42:18.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:18.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:18.008 issued rwts: total=5035,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:18.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:18.008 job2: (groupid=0, jobs=1): err= 0: pid=106908: Thu Dec 5 11:25:42 2024 00:42:18.008 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:42:18.008 slat (usec): min=5, max=6119, avg=108.86, stdev=517.90 00:42:18.008 clat (usec): min=8183, max=25533, 
avg=13982.80, stdev=3350.22 00:42:18.008 lat (usec): min=8205, max=25550, avg=14091.66, stdev=3375.12 00:42:18.008 clat percentiles (usec): 00:42:18.008 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[10552], 20.00th=[11600], 00:42:18.008 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[13566], 00:42:18.008 | 70.00th=[14484], 80.00th=[16319], 90.00th=[19530], 95.00th=[21103], 00:42:18.008 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25035], 99.95th=[25560], 00:42:18.008 | 99.99th=[25560] 00:42:18.008 write: IOPS=4752, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1005msec); 0 zone resets 00:42:18.008 slat (usec): min=8, max=5353, avg=96.77, stdev=379.45 00:42:18.008 clat (usec): min=3937, max=24743, avg=13104.38, stdev=2840.85 00:42:18.008 lat (usec): min=4472, max=24772, avg=13201.14, stdev=2861.73 00:42:18.008 clat percentiles (usec): 00:42:18.008 | 1.00th=[ 7767], 5.00th=[10290], 10.00th=[10814], 20.00th=[11338], 00:42:18.008 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:42:18.008 | 70.00th=[13304], 80.00th=[13698], 90.00th=[17433], 95.00th=[19792], 00:42:18.008 | 99.00th=[22938], 99.50th=[23987], 99.90th=[24511], 99.95th=[24773], 00:42:18.008 | 99.99th=[24773] 00:42:18.008 bw ( KiB/s): min=16664, max=20528, per=29.22%, avg=18596.00, stdev=2732.26, samples=2 00:42:18.008 iops : min= 4166, max= 5132, avg=4649.00, stdev=683.07, samples=2 00:42:18.008 lat (msec) : 4=0.01%, 10=3.76%, 20=89.34%, 50=6.88% 00:42:18.008 cpu : usr=3.49%, sys=12.45%, ctx=746, majf=0, minf=5 00:42:18.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:42:18.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:18.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:18.008 issued rwts: total=4608,4776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:18.008 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:18.008 job3: (groupid=0, jobs=1): err= 0: pid=106909: Thu Dec 5 11:25:42 2024 00:42:18.008 read: IOPS=2587, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1002msec) 00:42:18.008 slat (usec): min=8, max=8318, avg=170.47, stdev=809.33 00:42:18.008 clat (usec): min=1714, max=30171, avg=21601.61, stdev=3859.06 00:42:18.008 lat (usec): min=1736, max=30190, avg=21772.08, stdev=3835.77 00:42:18.008 clat percentiles (usec): 00:42:18.008 | 1.00th=[ 7832], 5.00th=[15926], 10.00th=[16909], 20.00th=[17957], 00:42:18.008 | 30.00th=[19268], 40.00th=[20579], 50.00th=[21890], 60.00th=[23462], 00:42:18.008 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25822], 95.00th=[27395], 00:42:18.008 | 99.00th=[28443], 99.50th=[29492], 99.90th=[30278], 99.95th=[30278], 00:42:18.008 | 99.99th=[30278] 00:42:18.008 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:42:18.008 slat (usec): min=13, max=6454, avg=171.77, stdev=576.29 00:42:18.008 clat (usec): min=9172, max=37157, avg=22880.24, stdev=5979.98 00:42:18.008 lat (usec): min=10055, max=37191, avg=23052.01, stdev=5998.32 00:42:18.008 clat percentiles (usec): 00:42:18.008 | 1.00th=[13304], 5.00th=[14615], 10.00th=[16581], 20.00th=[17695], 00:42:18.008 | 30.00th=[19792], 40.00th=[20317], 50.00th=[22414], 60.00th=[23200], 00:42:18.009 | 70.00th=[24249], 80.00th=[25035], 90.00th=[33817], 95.00th=[35390], 00:42:18.009 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:42:18.009 | 99.99th=[36963] 00:42:18.009 bw ( KiB/s): min=11536, max=12288, per=18.71%, avg=11912.00, stdev=531.74, samples=2 00:42:18.009 iops : min= 2884, max= 3072, avg=2978.00, 
stdev=132.94, samples=2 00:42:18.009 lat (msec) : 2=0.04%, 10=0.56%, 20=34.09%, 50=65.31% 00:42:18.009 cpu : usr=3.60%, sys=9.39%, ctx=383, majf=0, minf=5 00:42:18.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:42:18.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:18.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:18.009 issued rwts: total=2593,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:18.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:18.009 00:42:18.009 Run status group 0 (all jobs): 00:42:18.009 READ: bw=57.5MiB/s (60.3MB/s), 9.97MiB/s-19.6MiB/s (10.5MB/s-20.5MB/s), io=57.8MiB (60.6MB), run=1002-1005msec 00:42:18.009 WRITE: bw=62.2MiB/s (65.2MB/s), 11.8MiB/s-19.9MiB/s (12.3MB/s-20.9MB/s), io=62.5MiB (65.5MB), run=1002-1005msec 00:42:18.009 00:42:18.009 Disk stats (read/write): 00:42:18.009 nvme0n1: ios=2098/2519, merge=0/0, ticks=11033/14460, in_queue=25493, util=86.37% 00:42:18.009 nvme0n2: ios=4604/4608, merge=0/0, ticks=13099/11317, in_queue=24416, util=87.22% 00:42:18.009 nvme0n3: ios=4096/4291, merge=0/0, ticks=24790/23653, in_queue=48443, util=88.80% 00:42:18.009 nvme0n4: ios=2119/2560, merge=0/0, ticks=11196/14089, in_queue=25285, util=89.55% 00:42:18.009 11:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:42:18.009 [global] 00:42:18.009 thread=1 00:42:18.009 invalidate=1 00:42:18.009 rw=randwrite 00:42:18.009 time_based=1 00:42:18.009 runtime=1 00:42:18.009 ioengine=libaio 00:42:18.009 direct=1 00:42:18.009 bs=4096 00:42:18.009 iodepth=128 00:42:18.009 norandommap=0 00:42:18.009 numjobs=1 00:42:18.009 00:42:18.009 verify_dump=1 00:42:18.009 verify_backlog=512 00:42:18.009 verify_state_save=0 00:42:18.009 do_verify=1 00:42:18.009 verify=crc32c-intel 00:42:18.009 [job0] 00:42:18.009 filename=/dev/nvme0n1 00:42:18.009 [job1] 00:42:18.009 filename=/dev/nvme0n2 00:42:18.009 [job2] 00:42:18.009 filename=/dev/nvme0n3 00:42:18.009 [job3] 00:42:18.009 filename=/dev/nvme0n4 00:42:18.009 Could not set queue depth (nvme0n1) 00:42:18.009 Could not set queue depth (nvme0n2) 00:42:18.009 Could not set queue depth (nvme0n3) 00:42:18.009 Could not set queue depth (nvme0n4) 00:42:18.009 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:18.009 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:18.009 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:18.009 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:42:18.009 fio-3.35 00:42:18.009 Starting 4 threads 00:42:19.381 00:42:19.381 job0: (groupid=0, jobs=1): err= 0: pid=106969: Thu Dec 5 11:25:43 2024 00:42:19.381 read: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1008msec) 00:42:19.381 slat (usec): min=7, max=15843, avg=127.07, stdev=893.08 00:42:19.381 clat (usec): min=5986, max=32284, avg=17083.38, stdev=4071.51 00:42:19.381 lat (usec): min=6009, max=32321, avg=17210.45, stdev=4130.82 00:42:19.381 clat percentiles (usec): 00:42:19.381 | 1.00th=[ 8455], 5.00th=[11600], 10.00th=[12125], 20.00th=[13698], 00:42:19.381 | 30.00th=[15008], 40.00th=[15926], 50.00th=[16909], 60.00th=[17433], 00:42:19.381 | 70.00th=[18482], 80.00th=[19530], 
90.00th=[22414], 95.00th=[24773], 00:42:19.381 | 99.00th=[30540], 99.50th=[31327], 99.90th=[32113], 99.95th=[32375], 00:42:19.381 | 99.99th=[32375] 00:42:19.381 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:42:19.381 slat (usec): min=6, max=14697, avg=117.49, stdev=865.81 00:42:19.381 clat (usec): min=4022, max=32219, avg=15399.85, stdev=2546.06 00:42:19.381 lat (usec): min=4051, max=32246, avg=15517.33, stdev=2670.53 00:42:19.381 clat percentiles (usec): 00:42:19.381 | 1.00th=[ 8029], 5.00th=[11076], 10.00th=[12649], 20.00th=[13698], 00:42:19.381 | 30.00th=[14222], 40.00th=[14877], 50.00th=[15664], 60.00th=[16319], 00:42:19.381 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:42:19.381 | 99.00th=[19006], 99.50th=[24773], 99.90th=[31851], 99.95th=[32113], 00:42:19.381 | 99.99th=[32113] 00:42:19.381 bw ( KiB/s): min=16384, max=16384, per=32.41%, avg=16384.00, stdev= 0.00, samples=2 00:42:19.381 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:42:19.381 lat (msec) : 10=2.15%, 20=88.51%, 50=9.34% 00:42:19.381 cpu : usr=4.47%, sys=11.12%, ctx=294, majf=0, minf=5 00:42:19.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:42:19.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:19.381 issued rwts: total=3782,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:19.381 job1: (groupid=0, jobs=1): err= 0: pid=106970: Thu Dec 5 11:25:43 2024 00:42:19.381 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:42:19.381 slat (usec): min=5, max=14758, avg=146.78, stdev=1018.49 00:42:19.381 clat (usec): min=9940, max=51799, avg=21084.59, stdev=7946.97 00:42:19.381 lat (usec): min=9958, max=54965, avg=21231.37, stdev=8031.83 00:42:19.381 clat percentiles (usec): 00:42:19.381 | 1.00th=[11469], 5.00th=[13042], 10.00th=[14222], 20.00th=[15401], 00:42:19.381 | 30.00th=[15926], 40.00th=[16712], 50.00th=[17957], 60.00th=[19530], 00:42:19.381 | 70.00th=[22676], 80.00th=[27395], 90.00th=[35390], 95.00th=[37487], 00:42:19.381 | 99.00th=[41681], 99.50th=[41681], 99.90th=[51643], 99.95th=[51643], 00:42:19.381 | 99.99th=[51643] 00:42:19.381 write: IOPS=3314, BW=12.9MiB/s (13.6MB/s)(13.1MiB/1010msec); 0 zone resets 00:42:19.381 slat (usec): min=5, max=20511, avg=155.73, stdev=1066.92 00:42:19.381 clat (usec): min=5231, max=45066, avg=18827.61, stdev=7013.13 00:42:19.381 lat (usec): min=5268, max=45091, avg=18983.34, stdev=7110.10 00:42:19.381 clat percentiles (usec): 00:42:19.381 | 1.00th=[ 8848], 5.00th=[11863], 10.00th=[13435], 20.00th=[14746], 00:42:19.381 | 30.00th=[15401], 40.00th=[16188], 50.00th=[16712], 60.00th=[17171], 00:42:19.381 | 70.00th=[17957], 80.00th=[23200], 90.00th=[31327], 95.00th=[34866], 00:42:19.381 | 99.00th=[43254], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:42:19.381 | 99.99th=[44827] 00:42:19.381 bw ( KiB/s): min= 9376, max=16416, per=25.51%, avg=12896.00, stdev=4978.03, samples=2 00:42:19.381 iops : min= 2344, max= 4104, avg=3224.00, stdev=1244.51, samples=2 00:42:19.381 lat (msec) : 10=1.25%, 20=69.16%, 50=29.53%, 100=0.06% 00:42:19.381 cpu : usr=2.97%, sys=9.81%, ctx=342, majf=0, minf=3 00:42:19.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:42:19.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.381 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:19.381 issued rwts: total=3072,3348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:19.381 job2: (groupid=0, jobs=1): err= 0: pid=106971: Thu Dec 5 11:25:43 2024 00:42:19.381 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:42:19.381 slat (usec): min=9, max=30064, avg=246.26, stdev=1944.36 00:42:19.381 clat (usec): min=9197, max=60789, avg=30564.46, stdev=8415.93 00:42:19.381 lat (usec): min=9217, max=60828, avg=30810.72, stdev=8552.94 00:42:19.381 clat percentiles (usec): 00:42:19.381 | 1.00th=[10159], 5.00th=[19268], 10.00th=[20579], 20.00th=[23462], 00:42:19.381 | 30.00th=[26870], 40.00th=[29230], 50.00th=[29754], 60.00th=[31327], 00:42:19.381 | 70.00th=[33817], 80.00th=[36439], 90.00th=[43779], 95.00th=[46924], 00:42:19.381 | 99.00th=[53740], 99.50th=[55313], 99.90th=[60556], 99.95th=[60556], 00:42:19.381 | 99.99th=[60556] 00:42:19.381 write: IOPS=2237, BW=8950KiB/s (9164kB/s)(9048KiB/1011msec); 0 zone resets 00:42:19.381 slat (usec): min=7, max=27119, avg=212.57, stdev=1664.60 00:42:19.381 clat (usec): min=3718, max=60060, avg=28918.60, stdev=6766.48 00:42:19.381 lat (usec): min=5870, max=60119, avg=29131.17, stdev=6914.59 00:42:19.381 clat percentiles (usec): 00:42:19.381 | 1.00th=[10159], 5.00th=[17695], 10.00th=[19792], 20.00th=[24249], 00:42:19.381 | 30.00th=[26346], 40.00th=[27657], 50.00th=[30278], 60.00th=[31065], 00:42:19.381 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[36963], 00:42:19.381 | 99.00th=[55313], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:42:19.381 | 99.99th=[60031] 00:42:19.381 bw ( KiB/s): min= 8208, max= 8880, per=16.90%, avg=8544.00, stdev=475.18, samples=2 00:42:19.381 iops : min= 2052, max= 2220, avg=2136.00, stdev=118.79, samples=2 00:42:19.381 lat (msec) : 4=0.02%, 10=0.44%, 20=8.31%, 50=89.63%, 100=1.60% 00:42:19.381 cpu : usr=2.38%, sys=6.83%, ctx=154, majf=0, minf=7 00:42:19.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:42:19.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:19.381 issued rwts: total=2048,2262,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:19.381 job3: (groupid=0, jobs=1): err= 0: pid=106972: Thu Dec 5 11:25:43 2024 00:42:19.381 read: IOPS=2704, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1011msec) 00:42:19.381 slat (usec): min=8, max=20245, avg=165.08, stdev=1170.42 00:42:19.381 clat (usec): min=7860, max=49659, avg=22613.94, stdev=7155.98 00:42:19.381 lat (usec): min=10929, max=49698, avg=22779.02, stdev=7241.16 00:42:19.381 clat percentiles (usec): 00:42:19.381 | 1.00th=[11207], 5.00th=[13435], 10.00th=[14746], 20.00th=[16712], 00:42:19.382 | 30.00th=[17695], 40.00th=[19268], 50.00th=[20841], 60.00th=[22938], 00:42:19.382 | 70.00th=[25297], 80.00th=[29230], 90.00th=[34866], 95.00th=[35914], 00:42:19.382 | 99.00th=[40633], 99.50th=[43254], 99.90th=[46924], 99.95th=[47449], 00:42:19.382 | 99.99th=[49546] 00:42:19.382 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:42:19.382 slat (usec): min=6, max=17326, avg=169.81, stdev=1206.23 00:42:19.382 clat (usec): min=5273, max=46157, avg=21486.78, stdev=6705.69 00:42:19.382 lat (usec): min=5327, max=46192, avg=21656.59, stdev=6835.77 00:42:19.382 clat percentiles (usec): 00:42:19.382 | 
1.00th=[11076], 5.00th=[12911], 10.00th=[16319], 20.00th=[17433], 00:42:19.382 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18482], 60.00th=[19530], 00:42:19.382 | 70.00th=[21890], 80.00th=[27657], 90.00th=[32900], 95.00th=[35914], 00:42:19.382 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[43779], 00:42:19.382 | 99.99th=[46400] 00:42:19.382 bw ( KiB/s): min=11144, max=13458, per=24.33%, avg=12301.00, stdev=1636.25, samples=2 00:42:19.382 iops : min= 2786, max= 3364, avg=3075.00, stdev=408.71, samples=2 00:42:19.382 lat (msec) : 10=0.16%, 20=52.81%, 50=47.04% 00:42:19.382 cpu : usr=3.37%, sys=9.01%, ctx=249, majf=0, minf=6 00:42:19.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:42:19.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:42:19.382 issued rwts: total=2734,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:42:19.382 00:42:19.382 Run status group 0 (all jobs): 00:42:19.382 READ: bw=45.0MiB/s (47.1MB/s), 8103KiB/s-14.7MiB/s (8297kB/s-15.4MB/s), io=45.5MiB (47.7MB), run=1008-1011msec 00:42:19.382 WRITE: bw=49.4MiB/s (51.8MB/s), 8950KiB/s-15.9MiB/s (9164kB/s-16.6MB/s), io=49.9MiB (52.3MB), run=1008-1011msec 00:42:19.382 00:42:19.382 Disk stats (read/write): 00:42:19.382 nvme0n1: ios=3121/3381, merge=0/0, ticks=50830/50093, in_queue=100923, util=87.22% 00:42:19.382 nvme0n2: ios=2667/3072, merge=0/0, ticks=43871/46163, in_queue=90034, util=85.99% 00:42:19.382 nvme0n3: ios=1536/1961, merge=0/0, ticks=47532/55399, in_queue=102931, util=88.88% 00:42:19.382 nvme0n4: ios=2560/2591, merge=0/0, ticks=46597/43564, in_queue=90161, util=89.43% 00:42:19.382 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:42:19.382 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:42:19.382 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=106985 00:42:19.382 11:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:42:19.382 [global] 00:42:19.382 thread=1 00:42:19.382 invalidate=1 00:42:19.382 rw=read 00:42:19.382 time_based=1 00:42:19.382 runtime=10 00:42:19.382 ioengine=libaio 00:42:19.382 direct=1 00:42:19.382 bs=4096 00:42:19.382 iodepth=1 00:42:19.382 norandommap=1 00:42:19.382 numjobs=1 00:42:19.382 00:42:19.382 [job0] 00:42:19.382 filename=/dev/nvme0n1 00:42:19.382 [job1] 00:42:19.382 filename=/dev/nvme0n2 00:42:19.382 [job2] 00:42:19.382 filename=/dev/nvme0n3 00:42:19.382 [job3] 00:42:19.382 filename=/dev/nvme0n4 00:42:19.382 Could not set queue depth (nvme0n1) 00:42:19.382 Could not set queue depth (nvme0n2) 00:42:19.382 Could not set queue depth (nvme0n3) 00:42:19.382 Could not set queue depth (nvme0n4) 00:42:19.640 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:19.640 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:19.640 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:19.640 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:19.640 fio-3.35 00:42:19.640 Starting 4 threads 
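The job file printed above is what scripts/fio-wrapper generates for the 10-second read pass: one [jobN] stanza per attached namespace (/dev/nvme0n1 through /dev/nvme0n4), all inheriting the [global] options (libaio, direct I/O, 4 KiB blocks, queue depth 1, time-based 10 s run). As a rough standalone equivalent for a single device — a sketch only, assuming the wrapper adds nothing beyond the options it prints into the job file — the same workload could be launched directly:

    # Hypothetical single-device reproduction of one [jobN] stanza above;
    # the real run goes through scripts/fio-wrapper, which writes the job file.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --time_based --runtime=10 \
        --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 \
        --norandommap --numjobs=1 --thread --invalidate=1

The "Operation not supported" io_u errors that follow are deliberate: while this job is still running, the RPCs below (bdev_raid_delete, bdev_malloc_delete) pull the raid/concat/malloc bdevs out from under the subsystem, which is exactly what the hotplug check ("nvmf hotplug test: fio failed as expected") asserts.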
00:42:22.920 11:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:42:22.920 fio: pid=107028, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:22.920 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=49975296, buflen=4096 00:42:22.920 11:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:42:22.920 fio: pid=107027, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:22.920 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48521216, buflen=4096 00:42:22.920 11:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:22.920 11:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:42:23.179 fio: pid=107025, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:23.179 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=57720832, buflen=4096 00:42:23.179 11:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:23.179 11:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:42:23.439 fio: pid=107026, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:42:23.439 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=44593152, buflen=4096 00:42:23.439 00:42:23.439 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107025: Thu Dec 5 11:25:47 2024 00:42:23.439 read: IOPS=4114, BW=16.1MiB/s (16.9MB/s)(55.0MiB/3425msec) 00:42:23.439 slat (usec): min=8, max=15263, avg=14.96, stdev=185.35 00:42:23.439 clat (usec): min=133, max=3732, avg=226.95, stdev=60.37 00:42:23.439 lat (usec): min=144, max=15828, avg=241.91, stdev=197.28 00:42:23.439 clat percentiles (usec): 00:42:23.439 | 1.00th=[ 157], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 194], 00:42:23.439 | 30.00th=[ 204], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 233], 00:42:23.439 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 289], 00:42:23.439 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 611], 99.95th=[ 881], 00:42:23.439 | 99.99th=[ 3130] 00:42:23.439 bw ( KiB/s): min=16359, max=17136, per=31.32%, avg=16751.17, stdev=279.36, samples=6 00:42:23.439 iops : min= 4089, max= 4284, avg=4187.50, stdev=69.98, samples=6 00:42:23.439 lat (usec) : 250=76.28%, 500=23.59%, 750=0.04%, 1000=0.04% 00:42:23.439 lat (msec) : 2=0.04%, 4=0.01% 00:42:23.439 cpu : usr=0.67%, sys=4.38%, ctx=14099, majf=0, minf=1 00:42:23.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.439 issued rwts: total=14093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:23.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.439 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, 
func=io_u error, error=Operation not supported): pid=107026: Thu Dec 5 11:25:47 2024 00:42:23.439 read: IOPS=2969, BW=11.6MiB/s (12.2MB/s)(42.5MiB/3667msec) 00:42:23.439 slat (usec): min=10, max=16037, avg=25.42, stdev=269.48 00:42:23.439 clat (usec): min=152, max=4603, avg=309.64, stdev=95.15 00:42:23.439 lat (usec): min=174, max=16408, avg=335.06, stdev=287.30 00:42:23.439 clat percentiles (usec): 00:42:23.439 | 1.00th=[ 180], 5.00th=[ 194], 10.00th=[ 219], 20.00th=[ 269], 00:42:23.439 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 326], 00:42:23.439 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 383], 00:42:23.439 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 1123], 99.95th=[ 2147], 00:42:23.439 | 99.99th=[ 3654] 00:42:23.439 bw ( KiB/s): min=10752, max=13328, per=21.95%, avg=11737.14, stdev=794.06, samples=7 00:42:23.439 iops : min= 2688, max= 3332, avg=2934.29, stdev=198.52, samples=7 00:42:23.439 lat (usec) : 250=13.94%, 500=85.69%, 750=0.17%, 1000=0.08% 00:42:23.439 lat (msec) : 2=0.06%, 4=0.05%, 10=0.01% 00:42:23.439 cpu : usr=1.01%, sys=4.91%, ctx=10896, majf=0, minf=2 00:42:23.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.439 issued rwts: total=10888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:23.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.439 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107027: Thu Dec 5 11:25:47 2024 00:42:23.439 read: IOPS=3692, BW=14.4MiB/s (15.1MB/s)(46.3MiB/3208msec) 00:42:23.439 slat (usec): min=11, max=11725, avg=21.85, stdev=143.69 00:42:23.439 clat (usec): min=161, max=2772, avg=247.33, stdev=59.46 00:42:23.439 lat (usec): min=178, max=12359, avg=269.18, stdev=158.36 00:42:23.439 clat percentiles (usec): 00:42:23.439 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 215], 00:42:23.439 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 251], 00:42:23.439 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 318], 00:42:23.439 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 922], 99.95th=[ 1270], 00:42:23.439 | 99.99th=[ 2278] 00:42:23.439 bw ( KiB/s): min=14299, max=15329, per=27.89%, avg=14915.50, stdev=410.39, samples=6 00:42:23.439 iops : min= 3574, max= 3832, avg=3728.67, stdev=102.73, samples=6 00:42:23.439 lat (usec) : 250=59.82%, 500=39.91%, 750=0.13%, 1000=0.06% 00:42:23.439 lat (msec) : 2=0.06%, 4=0.02% 00:42:23.439 cpu : usr=1.09%, sys=5.74%, ctx=11850, majf=0, minf=2 00:42:23.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.439 issued rwts: total=11847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:23.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.439 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=107028: Thu Dec 5 11:25:47 2024 00:42:23.439 read: IOPS=4136, BW=16.2MiB/s (16.9MB/s)(47.7MiB/2950msec) 00:42:23.440 slat (nsec): min=8060, max=98771, avg=11494.89, stdev=4201.64 00:42:23.440 clat (usec): min=138, max=7441, avg=229.33, stdev=93.51 00:42:23.440 lat (usec): min=147, max=7460, avg=240.83, stdev=94.15 00:42:23.440 clat 
percentiles (usec): 00:42:23.440 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 194], 00:42:23.440 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 223], 60.00th=[ 235], 00:42:23.440 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 297], 00:42:23.440 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 433], 99.95th=[ 1319], 00:42:23.440 | 99.99th=[ 4080] 00:42:23.440 bw ( KiB/s): min=14792, max=17400, per=30.58%, avg=16354.20, stdev=1027.11, samples=5 00:42:23.440 iops : min= 3698, max= 4350, avg=4088.20, stdev=256.78, samples=5 00:42:23.440 lat (usec) : 250=73.86%, 500=26.04%, 750=0.02% 00:42:23.440 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02% 00:42:23.440 cpu : usr=0.88%, sys=4.14%, ctx=12203, majf=0, minf=1 00:42:23.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:23.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.440 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:23.440 issued rwts: total=12202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:23.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:23.440 00:42:23.440 Run status group 0 (all jobs): 00:42:23.440 READ: bw=52.2MiB/s (54.8MB/s), 11.6MiB/s-16.2MiB/s (12.2MB/s-16.9MB/s), io=192MiB (201MB), run=2950-3667msec 00:42:23.440 00:42:23.440 Disk stats (read/write): 00:42:23.440 nvme0n1: ios=13860/0, merge=0/0, ticks=3188/0, in_queue=3188, util=94.93% 00:42:23.440 nvme0n2: ios=10620/0, merge=0/0, ticks=3331/0, in_queue=3331, util=94.75% 00:42:23.440 nvme0n3: ios=11496/0, merge=0/0, ticks=2890/0, in_queue=2890, util=96.20% 00:42:23.440 nvme0n4: ios=11791/0, merge=0/0, ticks=2733/0, in_queue=2733, util=96.65% 00:42:23.440 11:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:23.440 11:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:42:23.700 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:23.700 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:42:23.700 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:23.700 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:42:23.959 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:23.959 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:42:24.218 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:42:24.218 11:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@69 -- # fio_status=0 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 106985 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:24.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:24.546 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:24.804 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:24.804 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:24.804 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:42:24.804 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:42:24.804 nvmf hotplug test: fio failed as expected 00:42:24.804 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:42:24.804 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:25.063 rmmod nvme_tcp 00:42:25.063 rmmod nvme_fabrics 00:42:25.063 rmmod nvme_keyring 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 106511 ']' 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 106511 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 106511 ']' 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 106511 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106511 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:25.063 killing process with pid 106511 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106511' 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 106511 00:42:25.063 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 106511 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:25.322 11:25:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:42:25.322 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:42:25.581 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:42:25.581 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@274 -- # iptr 00:42:25.581 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:42:25.581 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:42:25.581 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:42:25.581 00:42:25.581 real 0m19.350s 00:42:25.581 user 0m58.252s 00:42:25.581 sys 0m11.292s 00:42:25.581 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:25.581 ************************************ 00:42:25.581 END TEST nvmf_fio_target 00:42:25.581 ************************************ 00:42:25.581 11:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:25.581 ************************************ 00:42:25.581 START TEST nvmf_bdevio 00:42:25.581 ************************************ 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:42:25.581 * Looking for test storage... 
00:42:25.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:25.581 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.841 --rc genhtml_branch_coverage=1 00:42:25.841 --rc genhtml_function_coverage=1 00:42:25.841 --rc genhtml_legend=1 00:42:25.841 --rc geninfo_all_blocks=1 00:42:25.841 --rc geninfo_unexecuted_blocks=1 00:42:25.841 00:42:25.841 ' 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.841 --rc genhtml_branch_coverage=1 00:42:25.841 --rc genhtml_function_coverage=1 00:42:25.841 --rc genhtml_legend=1 00:42:25.841 --rc geninfo_all_blocks=1 00:42:25.841 --rc geninfo_unexecuted_blocks=1 00:42:25.841 00:42:25.841 ' 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.841 --rc genhtml_branch_coverage=1 00:42:25.841 --rc genhtml_function_coverage=1 00:42:25.841 --rc genhtml_legend=1 00:42:25.841 --rc geninfo_all_blocks=1 00:42:25.841 --rc geninfo_unexecuted_blocks=1 00:42:25.841 00:42:25.841 ' 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:25.841 --rc genhtml_branch_coverage=1 00:42:25.841 --rc genhtml_function_coverage=1 00:42:25.841 --rc genhtml_legend=1 00:42:25.841 --rc geninfo_all_blocks=1 00:42:25.841 --rc geninfo_unexecuted_blocks=1 00:42:25.841 00:42:25.841 ' 00:42:25.841 11:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:42:25.841 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.842 11:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@280 -- # nvmf_veth_init 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@223 -- # create_target_ns 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up 
lo NVMF_TARGET_NS_CMD 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # create_main_bridge 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@105 -- # delete_main_bridge 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:42:25.842 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:42:25.843 11:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator0 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target0 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0 up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target0 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:42:25.843 10.0.0.1 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
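set_ip receives addresses as 32-bit integers (167772161 is 0x0A000001) and val_to_ip renders them dotted-quad before the `ip addr add`. A plausible reconstruction of that helper, consistent with the printf '%u.%u.%u.%u' seen in the trace (the actual body in nvmf/setup.sh may differ):

    val_to_ip() {
        local val=$1
        # Split the 32-bit value into four bytes, most significant first.
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) $((  val         & 255 ))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2
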
nvmf/setup.sh@197 -- # val_to_ip 167772162 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:42:25.843 10.0.0.2 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator0 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:25.843 11:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target0_br 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:42:25.843 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- 
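At this point interface pair 0 is fully wired: initiator0 stays in the host namespace with 10.0.0.1/24, target0 moves into nvmf_ns_spdk with 10.0.0.2/24, and the *_br veth peers are enslaved to nvmf_br so the two sides can reach each other. A condensed replay of the commands the trace just ran for pair 0 (pair 1 repeats the same pattern with 10.0.0.3/10.0.0.4):

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk          # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    # Each device's IP is also cached in its ifalias for later lookups:
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br        # bridge the host-side peers
    ip link set target0_br master nvmf_br
    ip link set initiator0_br up
    ip link set target0_br up
    # Open the NVMe/TCP port on the initiator interface:
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
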
# [[ veth == phy ]] 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator1 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:25.844 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target1 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1 up 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target1_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@207 -- # ip link set target1_br up 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target1 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772163 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:42:26.105 10.0.0.3 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772164 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:42:26.105 
11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:42:26.105 10.0.0.4 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator1 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:26.105 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:42:26.106 11:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target1_br 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 2 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
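ping_ips resolves each logical device name back to an address before pinging it. The ifalias files written during setup act as a small key/value store, so get_ip_address reduces to a cat (run inside the namespace for target-side devices):

    cat /sys/class/net/initiator0/ifalias                           # -> 10.0.0.1
    ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias   # -> 10.0.0.2
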
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:42:26.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:26.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:42:26.106 00:42:26.106 --- 10.0.0.1 ping statistics --- 00:42:26.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.106 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 
-- # ip=10.0.0.2 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:42:26.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:26.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:42:26.106 00:42:26.106 --- 10.0.0.2 ping statistics --- 00:42:26.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.106 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD 
count=1 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:42:26.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:26.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:42:26.106 00:42:26.106 --- 10.0.0.3 ping statistics --- 00:42:26.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.106 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:26.106 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:42:26.107 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:26.107 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:42:26.107 00:42:26.107 --- 10.0.0.4 ping statistics --- 00:42:26.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:26.107 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # return 0 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:42:26.107 11:25:50 
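All four addresses answer with 0% loss, which verifies both directions across the bridge: initiator IPs are pinged from inside the target namespace, and target IPs are pinged from the host. The whole check amounts to:

    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator0
    ping -c 1 10.0.0.2                              # host      -> target0
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator1
    ping -c 1 10.0.0.4                              # host      -> target1
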
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:26.107 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:26.366 11:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:42:26.366 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:42:26.366 ' 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:42:26.367 11:25:50 
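nvmf_legacy_env then maps the freshly built dev_map entries onto the variable names the rest of the test suite consumes. After this block the environment is effectively:

    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1
    NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_SECOND_TARGET_IP=10.0.0.4
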
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=107401 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 107401 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 107401 ']' 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:26.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:26.367 11:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:26.367 [2024-12-05 11:25:50.903142] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:26.367 [2024-12-05 11:25:50.904873] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:42:26.367 [2024-12-05 11:25:50.904971] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:26.625 [2024-12-05 11:25:51.063666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:26.625 [2024-12-05 11:25:51.159064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:26.625 [2024-12-05 11:25:51.159145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
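nvmfappstart launches the target inside nvmf_ns_spdk with the --interrupt-mode flag that was appended to NVMF_APP at the start of this run, on core mask 0x78 (cores 3-6, matching the reactor_run notices below). Stripped of the harness wrappers, the launch is approximately:

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!                 # 107401 in this run
    waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs
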
00:42:26.625 [2024-12-05 11:25:51.159162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:26.625 [2024-12-05 11:25:51.159175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:26.625 [2024-12-05 11:25:51.159187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:26.625 [2024-12-05 11:25:51.161099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:26.625 [2024-12-05 11:25:51.161201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:42:26.625 [2024-12-05 11:25:51.161252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:26.626 [2024-12-05 11:25:51.161251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:42:26.884 [2024-12-05 11:25:51.309435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:26.884 [2024-12-05 11:25:51.310237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:26.884 [2024-12-05 11:25:51.310347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:26.884 [2024-12-05 11:25:51.311428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:26.884 [2024-12-05 11:25:51.312009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:27.451 [2024-12-05 11:25:51.923068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:27.451 Malloc0 00:42:27.451 11:25:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.451 11:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:27.451 [2024-12-05 11:25:52.015304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:42:27.451 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:42:27.451 { 00:42:27.451 "params": { 00:42:27.451 "name": "Nvme$subsystem", 00:42:27.451 "trtype": "$TEST_TRANSPORT", 00:42:27.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.451 "adrfam": "ipv4", 00:42:27.451 "trsvcid": "$NVMF_PORT", 00:42:27.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.451 "hdgst": ${hdgst:-false}, 00:42:27.451 "ddgst": ${ddgst:-false} 00:42:27.451 }, 00:42:27.451 "method": "bdev_nvme_attach_controller" 00:42:27.451 } 00:42:27.451 EOF 00:42:27.452 )") 00:42:27.452 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:42:27.452 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
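bdevio.sh then provisions the target over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from the top of the script), a subsystem carrying that bdev as a namespace, and a TCP listener on the first target IP. rpc_cmd is the harness wrapper that effectively forwards to scripts/rpc.py against /var/tmp/spdk.sock, so the equivalent direct calls are:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
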
00:42:27.452 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:42:27.452 11:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:42:27.452 "params": { 00:42:27.452 "name": "Nvme1", 00:42:27.452 "trtype": "tcp", 00:42:27.452 "traddr": "10.0.0.2", 00:42:27.452 "adrfam": "ipv4", 00:42:27.452 "trsvcid": "4420", 00:42:27.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:27.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:27.452 "hdgst": false, 00:42:27.452 "ddgst": false 00:42:27.452 }, 00:42:27.452 "method": "bdev_nvme_attach_controller" 00:42:27.452 }' 00:42:27.452 [2024-12-05 11:25:52.099358] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:42:27.452 [2024-12-05 11:25:52.099501] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107456 ] 00:42:27.709 [2024-12-05 11:25:52.270478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:27.709 [2024-12-05 11:25:52.337703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:27.709 [2024-12-05 11:25:52.337776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:27.709 [2024-12-05 11:25:52.337783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:27.968 I/O targets: 00:42:27.968 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:42:27.968 00:42:27.968 00:42:27.968 CUnit - A unit testing framework for C - Version 2.1-3 00:42:27.968 http://cunit.sourceforge.net/ 00:42:27.968 00:42:27.968 00:42:27.968 Suite: bdevio tests on: Nvme1n1 00:42:27.968 Test: blockdev write read block ...passed 00:42:27.968 Test: blockdev write zeroes read block ...passed 00:42:27.968 Test: blockdev write zeroes read no split ...passed 00:42:27.968 Test: blockdev write zeroes read split ...passed 00:42:28.225 Test: blockdev write zeroes read split partial ...passed 00:42:28.225 Test: blockdev reset ...[2024-12-05 11:25:52.625770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:42:28.225 [2024-12-05 11:25:52.625871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x624f50 (9): Bad file descriptor 00:42:28.225 [2024-12-05 11:25:52.629562] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:42:28.225 passed 00:42:28.225 Test: blockdev write read 8 blocks ...passed 00:42:28.225 Test: blockdev write read size > 128k ...passed 00:42:28.225 Test: blockdev write read invalid size ...passed 00:42:28.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:28.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:28.225 Test: blockdev write read max offset ...passed 00:42:28.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:28.225 Test: blockdev writev readv 8 blocks ...passed 00:42:28.225 Test: blockdev writev readv 30 x 1block ...passed 00:42:28.225 Test: blockdev writev readv block ...passed 00:42:28.225 Test: blockdev writev readv size > 128k ...passed 00:42:28.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:28.226 Test: blockdev comparev and writev ...[2024-12-05 11:25:52.802229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.802286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:42:28.226 [2024-12-05 11:25:52.802304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.802315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:28.226 [2024-12-05 11:25:52.802831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.802851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:42:28.226 [2024-12-05 11:25:52.802875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.802891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:42:28.226 [2024-12-05 11:25:52.803339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.803359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:42:28.226 [2024-12-05 11:25:52.803374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.803385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:42:28.226 [2024-12-05 11:25:52.803920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.803950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:42:28.226 [2024-12-05 11:25:52.803969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:42:28.226 [2024-12-05 11:25:52.803990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:42:28.226 passed 00:42:28.483 Test: blockdev nvme passthru rw ...passed 00:42:28.483 Test: blockdev nvme passthru vendor specific ...[2024-12-05 11:25:52.886978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:28.483 [2024-12-05 11:25:52.887015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:42:28.483 [2024-12-05 11:25:52.887143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:28.483 [2024-12-05 11:25:52.887154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:42:28.483 [2024-12-05 11:25:52.887270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:28.483 [2024-12-05 11:25:52.887281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:42:28.483 passed 00:42:28.483 Test: blockdev nvme admin passthru ...[2024-12-05 11:25:52.887391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:42:28.483 [2024-12-05 11:25:52.887402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:42:28.483 passed 00:42:28.483 Test: blockdev copy ...passed 00:42:28.483 00:42:28.483 Run Summary: Type Total Ran Passed Failed Inactive 00:42:28.483 suites 1 1 n/a 0 0 00:42:28.483 tests 23 23 23 0 0 00:42:28.483 asserts 152 152 152 0 n/a 00:42:28.483 00:42:28.483 Elapsed time = 0.895 seconds 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:28.483 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:28.741 rmmod nvme_tcp 00:42:28.741 rmmod nvme_fabrics 00:42:28.741 rmmod nvme_keyring 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 107401 ']' 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 107401 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 107401 ']' 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 107401 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107401 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:42:28.741 killing process with pid 107401 00:42:28.741 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107401' 00:42:28.742 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 107401 00:42:28.742 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 107401 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:42:29.000 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:42:29.259 11:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:42:29.259 00:42:29.259 real 0m3.735s 00:42:29.259 user 0m7.511s 00:42:29.259 sys 0m1.529s 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.259 ************************************ 00:42:29.259 END TEST nvmf_bdevio 00:42:29.259 ************************************ 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:42:29.259 00:42:29.259 real 3m33.141s 00:42:29.259 user 9m8.002s 00:42:29.259 sys 1m34.003s 00:42:29.259 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:29.260 11:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:29.260 ************************************ 00:42:29.260 END TEST nvmf_target_core_interrupt_mode 00:42:29.260 ************************************ 00:42:29.260 11:25:53 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:29.260 11:25:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:29.260 11:25:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:29.260 11:25:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:29.260 ************************************ 00:42:29.260 START TEST nvmf_interrupt 00:42:29.260 ************************************ 00:42:29.260 11:25:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:42:29.519 * Looking for test storage... 
00:42:29.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:29.519 11:25:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:29.519 11:25:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:42:29.519 11:25:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:29.519 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:29.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.519 --rc genhtml_branch_coverage=1 00:42:29.519 --rc genhtml_function_coverage=1 00:42:29.519 --rc genhtml_legend=1 00:42:29.519 --rc geninfo_all_blocks=1 00:42:29.520 --rc geninfo_unexecuted_blocks=1 00:42:29.520 00:42:29.520 ' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:29.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.520 --rc genhtml_branch_coverage=1 00:42:29.520 --rc genhtml_function_coverage=1 00:42:29.520 --rc genhtml_legend=1 00:42:29.520 --rc geninfo_all_blocks=1 00:42:29.520 --rc geninfo_unexecuted_blocks=1 00:42:29.520 00:42:29.520 ' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:29.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.520 --rc genhtml_branch_coverage=1 00:42:29.520 --rc genhtml_function_coverage=1 00:42:29.520 --rc genhtml_legend=1 00:42:29.520 --rc geninfo_all_blocks=1 00:42:29.520 --rc geninfo_unexecuted_blocks=1 00:42:29.520 00:42:29.520 ' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:29.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:29.520 --rc genhtml_branch_coverage=1 00:42:29.520 --rc genhtml_function_coverage=1 00:42:29.520 --rc genhtml_legend=1 00:42:29.520 --rc geninfo_all_blocks=1 00:42:29.520 --rc geninfo_unexecuted_blocks=1 00:42:29.520 00:42:29.520 ' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@280 -- # nvmf_veth_init 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@223 -- # create_target_ns 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # create_main_bridge 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@105 -- # delete_main_bridge 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # return 0 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:42:29.520 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up initiator0 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:29.521 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up target0 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target0 up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up target0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns target0 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:42:29.779 10.0.0.1 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:42:29.779 10.0.0.2 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up initiator0 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up target0_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local 
initiator=initiator1 target=target1 _ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up initiator1 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up target1 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target1 up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up target1_br 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:29.779 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns target1 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set target1 
netns nvmf_ns_spdk 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772163 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:42:29.780 10.0.0.3 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772164 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:42:29.780 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:42:30.039 10.0.0.4 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up initiator1 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:42:30.039 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up target1_br 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 2 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:30.040 11:25:54 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:42:30.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:30.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:42:30.040 00:42:30.040 --- 10.0.0.1 ping statistics --- 00:42:30.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.040 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target0 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:42:30.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
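[editorial sketch] The `local -n ns=NVMF_TARGET_NS_CMD` / eval dance above is how setup.sh optionally prefixes a command with `ip netns exec`: the caller passes the *name* of an array variable and a bash nameref resolves it. A rough condensation of the pattern (the function name is hypothetical):

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
    read_ifalias() {
        local dev=$1 in_ns=${2:-}
        if [[ -n $in_ns ]]; then
            local -n ns=$in_ns          # nameref: ns now aliases NVMF_TARGET_NS_CMD
            eval "${ns[*]} cat /sys/class/net/$dev/ifalias"
        else
            cat "/sys/class/net/$dev/ifalias"
        fi
    }
    read_ifalias target0 NVMF_TARGET_NS_CMD   # -> 10.0.0.2 (device lives in the namespace)
    read_ifalias initiator0                   # -> 10.0.0.1 (device lives on the host)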
00:42:30.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:42:30.040 00:42:30.040 --- 10.0.0.2 ping statistics --- 00:42:30.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.040 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:42:30.040 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
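[editorial sketch] The addresses being pinged follow from the `(( _dev++, ip_pool += 2 ))` step at the top of this trace: each initiatorN/targetN veth pair consumes two consecutive addresses. A sketch of that plan, with the pool start inferred from the observed values:

    pool=1                               # inferred starting octet
    for id in 0 1; do
        printf 'initiator%d -> 10.0.0.%d, target%d -> 10.0.0.%d\n' \
            "$id" "$pool" "$id" "$((pool + 1))"
        (( pool += 2 ))
    done
    # initiator0 -> 10.0.0.1, target0 -> 10.0.0.2
    # initiator1 -> 10.0.0.3, target1 -> 10.0.0.4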
00:42:30.040 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:42:30.040 00:42:30.040 --- 10.0.0.3 ping statistics --- 00:42:30.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.040 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:42:30.040 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:42:30.040 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
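[editorial sketch] Taken together, the four pings form the reachability matrix that ping_ips walks for its two pairs: initiator addresses are pinged from inside the target namespace, target addresses from the host side. Condensed, assuming the names from this run:

    for pair in 0 1; do
        i_ip=$(cat "/sys/class/net/initiator$pair/ifalias")
        ip netns exec nvmf_ns_spdk ping -c 1 "$i_ip"      # target ns -> initiator veth
        t_ip=$(ip netns exec nvmf_ns_spdk cat "/sys/class/net/target$pair/ifalias")
        ping -c 1 "$t_ip"                                 # host -> target, over nvmf_br
    done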
00:42:30.041 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:42:30.041 00:42:30.041 --- 10.0.0.4 ping statistics --- 00:42:30.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.041 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # return 0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target0 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target1 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target1 
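[editorial sketch] nvmf_legacy_env, running here, simply re-exports the dev_map lookups under the names older tests expect. The values below are the ones this trace resolves (the last one is read just after this point):

    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1     # ifalias of initiator0
    NVMF_SECOND_INITIATOR_IP=10.0.0.3    # ifalias of initiator1
    NVMF_FIRST_TARGET_IP=10.0.0.2        # ifalias of target0, read inside nvmf_ns_spdk
    NVMF_SECOND_TARGET_IP=10.0.0.4       # ifalias of target1, read inside nvmf_ns_spdk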
00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:42:30.041 ' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:42:30.041 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:42:30.298 11:25:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:42:30.298 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:42:30.298 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:30.298 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.298 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=107708 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 107708 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 107708 ']' 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:30.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:30.299 11:25:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.299 [2024-12-05 11:25:54.755017] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:30.299 [2024-12-05 11:25:54.756505] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
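[editorial sketch] nvmfappstart reduces to launching the target inside the namespace with the flags shown above and waiting on its RPC socket. The binary path and flags below are copied from the log; the rpc.py polling loop is a simplification of what waitforlisten does (it also caps retries):

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        sleep 0.5    # transport/subsystem RPCs only work once this succeeds
    done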
00:42:30.299 [2024-12-05 11:25:54.756598] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:30.299 [2024-12-05 11:25:54.903333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:30.559 [2024-12-05 11:25:54.985396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:30.559 [2024-12-05 11:25:54.985765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:30.559 [2024-12-05 11:25:54.985786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:30.559 [2024-12-05 11:25:54.985796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:30.559 [2024-12-05 11:25:54.985804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:30.559 [2024-12-05 11:25:54.987499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:30.559 [2024-12-05 11:25:54.987500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:30.559 [2024-12-05 11:25:55.124156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:30.559 [2024-12-05 11:25:55.125020] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:30.559 [2024-12-05 11:25:55.125040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:30.559 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:30.559 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:42:30.559 11:25:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:42:30.559 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:30.559 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:42:30.815 5000+0 records in 00:42:30.815 5000+0 records out 00:42:30.815 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0439737 s, 233 MB/s 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.815 AIO0 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 -q 256 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.815 [2024-12-05 11:25:55.329045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:30.815 [2024-12-05 11:25:55.369468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107708 0 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107708 0 idle 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:30.815 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107708 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.33 reactor_0' 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@27 -- # echo 107708 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.33 reactor_0 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107708 1 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107708 1 idle 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:31.072 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107713 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1' 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107713 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt 
-- target/interrupt.sh@35 -- # perf_pid=107772 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107708 0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107708 0 busy 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107708 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.33 reactor_0' 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107708 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.33 reactor_0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:31.329 11:25:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:42:32.699 11:25:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:42:32.699 11:25:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:32.699 11:25:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:32.699 11:25:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107708 root 20 0 64.2g 46336 33152 R 99.9 0.4 0:01.69 reactor_0' 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107708 root 20 0 64.2g 46336 
33152 R 99.9 0.4 0:01.69 reactor_0 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107708 1 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107708 1 busy 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107713 root 20 0 64.2g 46336 33152 R 60.0 0.4 0:00.78 reactor_1' 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107713 root 20 0 64.2g 46336 33152 R 60.0 0.4 0:00.78 reactor_1 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:32.699 11:25:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 107772 00:42:42.672 Initializing NVMe Controllers 00:42:42.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:42.672 Controller IO queue size 256, less than required. 00:42:42.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
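[editorial sketch] The busy/idle verdicts in this test all come from one-shot top samples of the reactor threads, exactly as traced above. A sketch of the measurement, with the field position and thresholds taken from the trace (30 serves as both the busy and the idle threshold in this run):

    cpu=$(top -bHn 1 -p "$nvmfpid" -w 256 | grep reactor_0 |
              sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column of the reactor thread
    cpu=${cpu%.*}                                      # 99.9 -> 99, 0.0 -> 0
    if (( cpu > 30 )); then echo busy; else echo idle; fi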
00:42:42.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:42.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:42.672 Initialization complete. Launching workers. 00:42:42.672 ======================================================== 00:42:42.672 Latency(us) 00:42:42.672 Device Information : IOPS MiB/s Average min max 00:42:42.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6740.10 26.33 38038.00 7099.67 60161.95 00:42:42.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6863.60 26.81 37343.96 9382.00 65061.67 00:42:42.672 ======================================================== 00:42:42.672 Total : 13603.69 53.14 37687.83 7099.67 65061.67 00:42:42.672 00:42:42.672 11:26:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:42.672 11:26:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107708 0 00:42:42.672 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107708 0 idle 00:42:42.672 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:42.673 11:26:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107708 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:12.84 reactor_0' 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107708 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:12.84 reactor_0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107708 1 00:42:42.673 11:26:06 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107708 1 idle 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107713 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.26 reactor_1' 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107713 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.26 reactor_1 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:42.673 11:26:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:44.142 11:26:08 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107708 0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107708 0 idle 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107708 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:12.89 reactor_0' 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107708 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:12.89 reactor_0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107708 1 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107708 1 idle 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107708 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107708 -w 256 00:42:44.142 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107713 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.27 reactor_1' 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107713 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.27 reactor_1 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:44.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:44.400 11:26:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:44.658 rmmod nvme_tcp 00:42:44.658 rmmod nvme_fabrics 00:42:44.658 rmmod nvme_keyring 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:42:44.658 
11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 107708 ']' 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 107708 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 107708 ']' 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 107708 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107708 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:44.658 killing process with pid 107708 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107708' 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 107708 00:42:44.658 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 107708 00:42:44.917 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:42:44.917 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:42:44.917 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@254 -- # local dev 00:42:44.917 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # remove_target_ns 00:42:44.917 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:44.917 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:42:44.917 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # delete_main_bridge 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # ip link delete 
initiator0 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # continue 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # continue 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@274 -- # iptr 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-save 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-restore 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:42:45.176 00:42:45.176 real 0m15.847s 00:42:45.176 user 0m27.048s 00:42:45.176 sys 0m7.891s 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.176 11:26:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:42:45.176 ************************************ 00:42:45.176 END TEST nvmf_interrupt 00:42:45.176 ************************************ 00:42:45.176 ************************************ 00:42:45.176 END TEST nvmf_tcp 00:42:45.176 ************************************ 00:42:45.176 00:42:45.176 real 20m38.044s 00:42:45.176 user 52m37.867s 00:42:45.176 sys 6m15.330s 00:42:45.176 11:26:09 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.176 11:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.176 11:26:09 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:42:45.176 11:26:09 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:45.176 11:26:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:42:45.176 11:26:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:45.176 11:26:09 -- common/autotest_common.sh@10 -- # set +x 00:42:45.176 ************************************ 00:42:45.176 START TEST spdkcli_nvmf_tcp 00:42:45.176 ************************************ 00:42:45.176 11:26:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:42:45.435 * Looking for test storage... 
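[editorial sketch] For reference, the nvmf_fini teardown that closed out nvmf_interrupt above reduces to a few commands; target0/target1 hit the `continue` branches because deleting the namespace already removed them (command forms copied from the trace):

    ip netns delete nvmf_ns_spdk          # _remove_target_ns; takes targetN veths along
    ip link delete nvmf_br                # main bridge
    ip link delete initiator0
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's tagged rules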
00:42:45.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:45.435 11:26:09 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.435 --rc genhtml_branch_coverage=1 00:42:45.435 --rc genhtml_function_coverage=1 00:42:45.435 --rc genhtml_legend=1 00:42:45.435 --rc geninfo_all_blocks=1 00:42:45.435 --rc geninfo_unexecuted_blocks=1 00:42:45.435 00:42:45.435 ' 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.435 --rc genhtml_branch_coverage=1 
00:42:45.435 --rc genhtml_function_coverage=1 00:42:45.435 --rc genhtml_legend=1 00:42:45.435 --rc geninfo_all_blocks=1 00:42:45.435 --rc geninfo_unexecuted_blocks=1 00:42:45.435 00:42:45.435 ' 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.435 --rc genhtml_branch_coverage=1 00:42:45.435 --rc genhtml_function_coverage=1 00:42:45.435 --rc genhtml_legend=1 00:42:45.435 --rc geninfo_all_blocks=1 00:42:45.435 --rc geninfo_unexecuted_blocks=1 00:42:45.435 00:42:45.435 ' 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:45.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:45.435 --rc genhtml_branch_coverage=1 00:42:45.435 --rc genhtml_function_coverage=1 00:42:45.435 --rc genhtml_legend=1 00:42:45.435 --rc geninfo_all_blocks=1 00:42:45.435 --rc geninfo_unexecuted_blocks=1 00:42:45.435 00:42:45.435 ' 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:42:45.435 11:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:42:45.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # 
timing_enter run_nvmf_tgt 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108094 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 108094 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 108094 ']' 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:45.436 11:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:45.695 [2024-12-05 11:26:10.143837] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:42:45.695 [2024-12-05 11:26:10.144000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108094 ] 00:42:45.695 [2024-12-05 11:26:10.311278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:45.952 [2024-12-05 11:26:10.378466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:45.952 [2024-12-05 11:26:10.378481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.520 11:26:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:46.520 11:26:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:42:46.520 11:26:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:42:46.520 11:26:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:46.520 11:26:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:46.780 11:26:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:42:46.780 11:26:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:42:46.780 11:26:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:42:46.780 11:26:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:46.780 11:26:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:46.780 11:26:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:42:46.780 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:42:46.780 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:42:46.780 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:42:46.780 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:42:46.780 '\''/bdevs/malloc create 32 512 Malloc6'\'' 
'\''Malloc6'\'' True 00:42:46.780 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:42:46.780 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:46.780 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:46.780 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:42:46.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:42:46.780 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:42:46.780 ' 00:42:50.071 [2024-12-05 11:26:14.009290] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:51.021 [2024-12-05 11:26:15.327228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:42:53.552 [2024-12-05 11:26:17.777611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:55.454 [2024-12-05 11:26:19.915228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:57.395 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:57.395 Executing command: ['/bdevs/malloc create 32 512 
Malloc2', 'Malloc2', True] 00:42:57.395 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:57.395 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:57.395 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:57.395 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:57.395 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:57.395 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.395 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.395 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:57.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:57.395 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:57.395 11:26:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:57.652 11:26:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:57.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:42:57.652 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:57.653 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:57.653 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:57.653 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:57.653 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:57.653 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:57.653 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:57.653 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:57.653 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:57.653 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:57.653 ' 00:43:04.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:43:04.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:43:04.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:04.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 
00:43:04.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:43:04.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:43:04.211 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:43:04.211 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:43:04.211 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:43:04.211 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:43:04.211 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:43:04.211 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:43:04.211 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:43:04.211 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 108094 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108094 ']' 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108094 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:04.211 11:26:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108094 00:43:04.211 killing process with pid 108094 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108094' 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 108094 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 108094 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 108094 ']' 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 108094 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 108094 ']' 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 108094 00:43:04.211 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (108094) - No such process 00:43:04.211 Process with pid 108094 is not found 00:43:04.211 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 108094 is not found' 00:43:04.212 11:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:43:04.212 11:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:43:04.212 11:26:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test 
/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:43:04.212 00:43:04.212 real 0m18.503s 00:43:04.212 user 0m40.326s 00:43:04.212 sys 0m1.030s 00:43:04.212 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:04.212 11:26:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:04.212 ************************************ 00:43:04.212 END TEST spdkcli_nvmf_tcp 00:43:04.212 ************************************ 00:43:04.212 11:26:28 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:04.212 11:26:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:04.212 11:26:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:04.212 11:26:28 -- common/autotest_common.sh@10 -- # set +x 00:43:04.212 ************************************ 00:43:04.212 START TEST nvmf_identify_passthru 00:43:04.212 ************************************ 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:43:04.212 * Looking for test storage... 00:43:04.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.212 --rc genhtml_branch_coverage=1 00:43:04.212 --rc genhtml_function_coverage=1 00:43:04.212 --rc genhtml_legend=1 00:43:04.212 --rc geninfo_all_blocks=1 00:43:04.212 --rc geninfo_unexecuted_blocks=1 00:43:04.212 00:43:04.212 ' 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.212 --rc genhtml_branch_coverage=1 00:43:04.212 --rc genhtml_function_coverage=1 00:43:04.212 --rc genhtml_legend=1 00:43:04.212 --rc geninfo_all_blocks=1 00:43:04.212 --rc geninfo_unexecuted_blocks=1 00:43:04.212 00:43:04.212 ' 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.212 --rc genhtml_branch_coverage=1 00:43:04.212 --rc genhtml_function_coverage=1 00:43:04.212 --rc genhtml_legend=1 00:43:04.212 --rc geninfo_all_blocks=1 00:43:04.212 --rc geninfo_unexecuted_blocks=1 00:43:04.212 00:43:04.212 ' 00:43:04.212 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:04.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:04.212 --rc genhtml_branch_coverage=1 00:43:04.212 --rc genhtml_function_coverage=1 00:43:04.212 --rc genhtml_legend=1 00:43:04.212 --rc geninfo_all_blocks=1 00:43:04.212 --rc geninfo_unexecuted_blocks=1 00:43:04.212 00:43:04.212 ' 00:43:04.212 11:26:28 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:04.212 
11:26:28 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:04.212 11:26:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:04.212 11:26:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.212 11:26:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.212 11:26:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.212 11:26:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:04.212 11:26:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:43:04.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:43:04.212 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:43:04.212 11:26:28 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:04.213 11:26:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:43:04.213 11:26:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:04.213 11:26:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.213 11:26:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.213 11:26:28 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.213 11:26:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:43:04.213 11:26:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:04.213 11:26:28 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:04.213 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:04.213 11:26:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@280 -- # nvmf_veth_init 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@223 -- # create_target_ns 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local 
-n ns=NVMF_TARGET_NS_CMD 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # create_main_bridge 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@105 -- # delete_main_bridge 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # return 0 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@151 -- # set_up initiator0 00:43:04.213 
11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@151 -- # set_up target0 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target0 up 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up target0_br 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # add_to_ns target0 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # 
eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:43:04.213 10.0.0.1 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.213 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:43:04.214 10.0.0.2 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up initiator0 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up target0_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@151 -- # set_up initiator1 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 
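(annotation — not part of the captured console output) The setup traced above is nvmf/setup.sh plumbing a virtual NVMe/TCP test network: one bridge plus veth pairs whose target ends live in the nvmf_ns_spdk namespace. Condensed into plain ip(8)/iptables commands, the first initiator/target pair from this trace looks like the sketch below. Device names, the namespace, and the 10.0.0.x/24 addresses are taken verbatim from the log; the eval/set_up indirection and error handling are omitted, so this is a readability sketch, not the script itself.

    ip netns add nvmf_ns_spdk                          # target side gets its own network namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    echo 10.0.0.1 > /sys/class/net/initiator0/ifalias  # stash the IP where helpers can read it back
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br           # the *_br peers, enslaved to nvmf_br,
    ip link set initiator0_br up                       # carry traffic between host and namespace
    ip link set target0_br master nvmf_br
    ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

The trace then repeats the same steps for initiator1/target1 with 10.0.0.3 and 10.0.0.4, which is the second pair being built above.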
00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@151 -- # set_up target1 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target1 up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up target1_br 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # add_to_ns target1 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:43:04.214 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772163 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:43:04.474 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:43:04.474 
10.0.0.3 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772164 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:43:04.475 10.0.0.4 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up initiator1 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:43:04.475 11:26:28 
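
The set_ip steps above derive dotted-quad addresses from a 32-bit integer pool (167772163 is 0x0A000003, i.e. 10.0.0.3). The trace only shows the already-split printf, so this is a hedged sketch of an equivalent val_to_ip using shifts and masks:

    # Unpack a 32-bit integer into octets; 167772164 -> 10.0.0.4 as in the trace.
    val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
        $(( val >> 24 & 0xff )) $(( val >> 16 & 0xff )) \
        $(( val >> 8 & 0xff )) $(( val & 0xff ))
    }
    val_to_ip 167772164
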
nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up target1_br 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 2 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 
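
The ipts call above tags every firewall rule it inserts, which is what makes the SPDK_NVMF-filtered iptables-restore during teardown possible. As the traced expansion shows, the helper reduces to:

    # Apply the rule and stamp it with a comment so cleanup can later strip
    # SPDK-owned rules out of iptables-save output.
    ipts() {
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
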
00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:43:04.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:04.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:43:04.475 00:43:04.475 --- 10.0.0.1 ping statistics --- 00:43:04.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:04.475 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target0 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:43:04.475 11:26:28 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:43:04.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:04.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:43:04.475 00:43:04.475 --- 10.0.0.2 ping statistics --- 00:43:04.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:04.475 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ )) 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:43:04.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:43:04.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:43:04.475 00:43:04.475 --- 10.0.0.3 ping statistics --- 00:43:04.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:04.475 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:43:04.475 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
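
ping_ips checks both directions of each pair: initiator addresses are pinged from inside the target namespace, target addresses from the host side. Condensed, the traced loop amounts to the sketch below; the harness resolves each address via ifalias rather than computing it like this.

    for pair in 0 1; do
      # 10.0.0.1/10.0.0.3 from inside the netns, 10.0.0.2/10.0.0.4 from the host
      ip netns exec nvmf_ns_spdk ping -c 1 "10.0.0.$((pair * 2 + 1))"
      ping -c 1 "10.0.0.$((pair * 2 + 2))"
    done
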
00:43:04.475 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:43:04.475 00:43:04.475 --- 10.0.0.4 ping statistics --- 00:43:04.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:04.475 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ )) 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@281 -- # return 0 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:04.475 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator0 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator0 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target0 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target0 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target1 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # 
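
Every address lookup above goes through the device's ifalias file rather than parsing ip-addr output; set_ip wrote the address there earlier precisely so it can be read back, inside or outside the namespace. A condensed sketch of the lookup (the real helper goes through get_net_dev and the dev_map first):

    get_ip_address() {
      local dev=$1 ns=${2:+ip netns exec $2}
      $ns cat "/sys/class/net/$dev/ifalias"
    }
    get_ip_address initiator0             # prints 10.0.0.1
    get_ip_address target0 nvmf_ns_spdk   # prints 10.0.0.2
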
eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:43:04.476 ' 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:04.476 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:43:04.734 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:43:04.734 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:04.734 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:43:04.734 11:26:29 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:43:04.734 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:04.734 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:43:04.734 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:43:04.734 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:43:04.734 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:43:04.734 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:43:04.734 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:43:04.734 
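
get_first_nvme_bdf, traced above, leans on gen_nvme.sh: the script emits a bdev_nvme JSON config covering every local controller, and jq extracts the PCI addresses; element zero becomes the passthru device.

    # As run in the trace; two controllers were found and the first is used.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}   # 0000:00:10.0 in this run
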
11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:43:05.010 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:43:05.010 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:43:05.010 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:43:05.010 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:43:05.010 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:43:05.273 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.273 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.273 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=108637 00:43:05.273 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:05.273 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 108637 00:43:05.273 11:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 108637 ']' 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:05.273 11:26:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:05.273 [2024-12-05 11:26:29.786623] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:43:05.273 [2024-12-05 11:26:29.787377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.532 [2024-12-05 11:26:29.947652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:05.532 [2024-12-05 11:26:30.044645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:05.532 [2024-12-05 11:26:30.044731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:43:05.532 [2024-12-05 11:26:30.044748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:05.532 [2024-12-05 11:26:30.044764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:05.532 [2024-12-05 11:26:30.044776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:05.532 [2024-12-05 11:26:30.046468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.532 [2024-12-05 11:26:30.046662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:05.532 [2024-12-05 11:26:30.046664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.532 [2024-12-05 11:26:30.046566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:06.469 11:26:30 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:06.469 11:26:30 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:43:06.469 11:26:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:43:06.469 11:26:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.469 11:26:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.469 11:26:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.469 11:26:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:43:06.469 11:26:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.469 11:26:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.469 [2024-12-05 11:26:31.041449] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.469 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.469 [2024-12-05 11:26:31.055690] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.469 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.469 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.469 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.728 Nvme0n1 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.728 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.728 11:26:31 nvmf_identify_passthru 
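
As traced above, nvmf_tgt starts inside the namespace with --wait-for-rpc so configuration can be injected before the framework initializes, and waitforlisten blocks until /var/tmp/spdk.sock answers; rpc_cmd then drives the target over that socket. Written out as plain rpc.py calls (a sketch; rpc_cmd is the harness wrapper around the same socket):

    ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # once the socket is up:
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # enable passthru identify
    scripts/rpc.py framework_start_init                        # finish the deferred init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
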
-- common/autotest_common.sh@10 -- # set +x 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.728 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.728 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.728 [2024-12-05 11:26:31.217829] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.728 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:06.728 [ 00:43:06.728 { 00:43:06.728 "allow_any_host": true, 00:43:06.728 "hosts": [], 00:43:06.728 "listen_addresses": [], 00:43:06.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:43:06.728 "subtype": "Discovery" 00:43:06.728 }, 00:43:06.728 { 00:43:06.728 "allow_any_host": true, 00:43:06.728 "hosts": [], 00:43:06.728 "listen_addresses": [ 00:43:06.728 { 00:43:06.728 "adrfam": "IPv4", 00:43:06.728 "traddr": "10.0.0.2", 00:43:06.728 "trsvcid": "4420", 00:43:06.728 "trtype": "TCP" 00:43:06.728 } 00:43:06.728 ], 00:43:06.728 "max_cntlid": 65519, 00:43:06.728 "max_namespaces": 1, 00:43:06.728 "min_cntlid": 1, 00:43:06.728 "model_number": "SPDK bdev Controller", 00:43:06.728 "namespaces": [ 00:43:06.728 { 00:43:06.728 "bdev_name": "Nvme0n1", 00:43:06.728 "name": "Nvme0n1", 00:43:06.728 "nguid": "BEC8A4D2D6C84E76956CCE4A209A1402", 00:43:06.728 "nsid": 1, 00:43:06.728 "uuid": "bec8a4d2-d6c8-4e76-956c-ce4a209a1402" 00:43:06.728 } 00:43:06.728 ], 00:43:06.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:43:06.728 "serial_number": "SPDK00000000000001", 00:43:06.728 "subtype": "NVMe" 00:43:06.728 } 00:43:06.728 ] 00:43:06.728 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:06.728 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:43:06.728 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:43:06.728 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:06.987 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:43:06.987 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:06.987 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- 
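
The subsystem wiring above, as the equivalent rpc.py sequence; the nvmf_get_subsystems dump that follows it confirms the namespace landed with nsid 1 and the listener on 10.0.0.2:4420:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 1   # allow any host, at most one namespace
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
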
# grep 'Model Number:' 00:43:06.987 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:43:07.246 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:43:07.246 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:43:07.246 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:43:07.246 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:07.246 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:07.246 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.246 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:07.246 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:43:07.246 11:26:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:43:07.246 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:43:07.246 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:43:07.246 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:43:07.246 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:43:07.246 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:43:07.246 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:43:07.246 rmmod nvme_tcp 00:43:07.246 rmmod nvme_fabrics 00:43:07.246 rmmod nvme_keyring 00:43:07.504 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:43:07.505 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:43:07.505 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:43:07.505 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 108637 ']' 00:43:07.505 11:26:31 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 108637 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 108637 ']' 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 108637 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108637 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:07.505 killing process with pid 108637 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108637' 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 108637 00:43:07.505 11:26:31 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 108637 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # nvmf_fini 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@254 -- # local dev 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # remove_target_ns 
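
The test's actual assertion is the pair of comparisons above: identify the controller once over PCIe and once through the exported NVMe/TCP subsystem, and fail if the serial (12340) or model (QEMU) differ, since passthru mode must surface the backing controller's identity. Condensed (paths shortened; the trace uses the full build/bin path):

    sn_pcie=$(spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' \
        | awk '/Serial Number:/ {print $3}')
    sn_tcp=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | awk '/Serial Number:/ {print $3}')
    [ "$sn_pcie" = "$sn_tcp" ] || exit 1   # 12340 == 12340 in this run
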
00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:07.764 11:26:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:07.764 11:26:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # delete_main_bridge 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # continue 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # continue 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/setup.sh@274 -- # iptr 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-save 00:43:07.764 
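
nvmf_fini, traced above, tears the topology down in dependency order. Note the two continue branches: target0 and target1 live inside the namespace, so deleting the namespace already removed them. Roughly:

    ip netns delete nvmf_ns_spdk       # takes target0/target1 with it
    ip link delete nvmf_br             # main bridge
    ip link delete initiator0          # host-side veths; peer ends die with them
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged rules
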
11:26:32 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-restore 00:43:07.764 11:26:32 nvmf_identify_passthru -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:43:07.764 00:43:07.764 real 0m4.039s 00:43:07.764 user 0m9.439s 00:43:07.764 sys 0m1.272s 00:43:07.764 11:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:07.764 11:26:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:43:07.764 ************************************ 00:43:07.764 END TEST nvmf_identify_passthru 00:43:07.764 ************************************ 00:43:08.023 11:26:32 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:43:08.023 11:26:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:08.023 11:26:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:08.023 11:26:32 -- common/autotest_common.sh@10 -- # set +x 00:43:08.023 ************************************ 00:43:08.023 START TEST nvmf_dif 00:43:08.023 ************************************ 00:43:08.023 11:26:32 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:43:08.023 * Looking for test storage... 00:43:08.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:08.023 11:26:32 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:08.023 11:26:32 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:08.023 11:26:32 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:43:08.023 11:26:32 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:08.023 11:26:32 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:08.023 11:26:32 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:08.023 11:26:32 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:08.023 11:26:32 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:43:08.023 11:26:32 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:43:08.023 11:26:32 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:43:08.023 11:26:32 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:08.024 11:26:32 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:43:08.024 11:26:32 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:08.024 11:26:32 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.024 --rc genhtml_branch_coverage=1 00:43:08.024 --rc genhtml_function_coverage=1 00:43:08.024 --rc genhtml_legend=1 00:43:08.024 --rc geninfo_all_blocks=1 00:43:08.024 --rc geninfo_unexecuted_blocks=1 00:43:08.024 00:43:08.024 ' 00:43:08.024 11:26:32 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.024 --rc genhtml_branch_coverage=1 00:43:08.024 --rc genhtml_function_coverage=1 00:43:08.024 --rc genhtml_legend=1 00:43:08.024 --rc geninfo_all_blocks=1 00:43:08.024 --rc geninfo_unexecuted_blocks=1 00:43:08.024 00:43:08.024 ' 00:43:08.024 11:26:32 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.024 --rc genhtml_branch_coverage=1 00:43:08.024 --rc genhtml_function_coverage=1 00:43:08.024 --rc genhtml_legend=1 00:43:08.024 --rc geninfo_all_blocks=1 00:43:08.024 --rc geninfo_unexecuted_blocks=1 00:43:08.024 00:43:08.024 ' 00:43:08.024 11:26:32 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.024 --rc genhtml_branch_coverage=1 00:43:08.024 --rc genhtml_function_coverage=1 00:43:08.024 --rc genhtml_legend=1 00:43:08.024 --rc geninfo_all_blocks=1 00:43:08.024 --rc geninfo_unexecuted_blocks=1 00:43:08.024 00:43:08.024 ' 00:43:08.024 11:26:32 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:08.024 11:26:32 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:43:08.283 11:26:32 nvmf_dif -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:43:08.283 11:26:32 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:43:08.283 11:26:32 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:08.283 11:26:32 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:43:08.283 11:26:32 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:43:08.283 11:26:32 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:08.283 11:26:32 nvmf_dif -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:08.283 11:26:32 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:43:08.283 11:26:32 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:08.283 11:26:32 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:08.283 11:26:32 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:08.283 11:26:32 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.283 11:26:32 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.284 11:26:32 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.284 11:26:32 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:43:08.284 11:26:32 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:43:08.284 11:26:32 
nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:43:08.284 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:43:08.284 11:26:32 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:43:08.284 11:26:32 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:43:08.284 11:26:32 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:43:08.284 11:26:32 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:43:08.284 11:26:32 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:08.284 11:26:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:08.284 11:26:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@280 -- # nvmf_veth_init 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@223 -- # create_target_ns 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@224 -- # create_main_bridge 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@105 -- # delete_main_bridge 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 
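
nvmftestinit for the dif test rebuilds the same topology from scratch. The first step, traced above, creates the target namespace and records the exec prefix that every later in-namespace command reuses:

    ip netns add nvmf_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up   # bring up loopback inside the netns
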
00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:08.284 11:26:32 nvmf_dif -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator0 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:43:08.284 11:26:32 nvmf_dif -- 
nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target0 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0 up 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target0_br 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target0 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:43:08.284 10.0.0.1 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:43:08.284 11:26:32 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 
00:43:08.285 10.0.0.2 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator0 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target0_br 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:43:08.285 11:26:32 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:43:08.285 11:26:32 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:43:08.544 11:26:32 
nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator1 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target1 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1 up 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target1_br 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target1 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:43:08.544 11:26:32 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772163 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 
10 0 0 3 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:43:08.544 10.0.0.3 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772164 00:43:08.544 11:26:33 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:43:08.545 10.0.0.4 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 
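Each initiator/target pair is wired identically; condensing the traced commands for a pair such as initiator1/target1 into one runnable sketch (the bridging and firewall steps mirror pair 0 above). The bit-shift octet split is an assumption here — the harness's val_to_ip only echoes its final printf (167772163 -> 10.0.0.3):

    val=167772163                                   # pool slot for initiator1 (0x0a000003)
    ip=$(printf '%u.%u.%u.%u' $(( val >> 24 & 0xff )) $(( val >> 16 & 0xff )) \
                              $(( val >> 8 & 0xff )) $(( val & 0xff )))   # -> 10.0.0.3

    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk          # only the target end moves into the namespace
    ip addr add "$ip/24" dev initiator1             # 10.0.0.3
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    ip link set initiator1 up
    ip netns exec nvmf_ns_spdk ip link set target1 up
    ip link set initiator1_br master nvmf_br        # both bridge-side ends join the shared bridge
    ip link set target1_br master nvmf_br
    ip link set initiator1_br up
    ip link set target1_br up
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port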
00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target1_br 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:43:08.545 11:26:33 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 2 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:43:08.545 
11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:43:08.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:08.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:43:08.545 00:43:08.545 --- 10.0.0.1 ping statistics --- 00:43:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.545 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:43:08.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:08.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:43:08.545 00:43:08.545 --- 10.0.0.2 ping statistics --- 00:43:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.545 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:43:08.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:43:08.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:43:08.545 00:43:08.545 --- 10.0.0.3 ping statistics --- 00:43:08.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.545 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:43:08.545 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:43:08.804 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:43:08.804 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:43:08.804 00:43:08.804 --- 10.0.0.4 ping statistics --- 00:43:08.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:08.804 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:43:08.804 11:26:33 nvmf_dif -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:08.804 11:26:33 nvmf_dif -- nvmf/common.sh@281 -- # return 0 00:43:08.804 11:26:33 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:43:08.804 11:26:33 nvmf_dif -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:09.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:43:09.063 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:43:09.063 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:43:09.063 11:26:33 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:43:09.063 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator1/ifalias 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:43:09.322 11:26:33 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:09.323 11:26:33 nvmf_dif -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:43:09.323 11:26:33 nvmf_dif -- 
nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:43:09.323 ' 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:43:09.323 11:26:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:43:09.323 11:26:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=109041 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 109041 00:43:09.323 11:26:33 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 109041 ']' 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:09.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:09.323 11:26:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:09.323 [2024-12-05 11:26:33.883285] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:43:09.323 [2024-12-05 11:26:33.883407] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:09.581 [2024-12-05 11:26:34.043379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:09.581 [2024-12-05 11:26:34.134264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:09.581 [2024-12-05 11:26:34.134345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:09.581 [2024-12-05 11:26:34.134360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:09.581 [2024-12-05 11:26:34.134374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:09.581 [2024-12-05 11:26:34.134385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
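nvmfappstart above launches the target application inside the namespace and blocks until its RPC socket answers. Condensed with the paths from this run — the polling loop is only a sketch of what waitforlisten does, not its exact code:

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll the default /var/tmp/spdk.sock until the app services RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done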
00:43:09.581 [2024-12-05 11:26:34.134887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:43:10.529 11:26:34 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:10.529 11:26:34 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:10.529 11:26:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:43:10.529 11:26:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:10.529 [2024-12-05 11:26:34.980338] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.529 11:26:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:43:10.529 11:26:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:10.530 11:26:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:10.530 11:26:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:10.530 ************************************ 00:43:10.530 START TEST fio_dif_1_default 00:43:10.530 ************************************ 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.530 11:26:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:10.530 bdev_null0 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:10.530 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:43:10.531 [2024-12-05 11:26:35.024509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:10.531 { 00:43:10.531 "params": { 00:43:10.531 "name": "Nvme$subsystem", 00:43:10.531 "trtype": "$TEST_TRANSPORT", 00:43:10.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:10.531 "adrfam": "ipv4", 00:43:10.531 "trsvcid": "$NVMF_PORT", 00:43:10.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:10.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:10.531 "hdgst": ${hdgst:-false}, 00:43:10.531 "ddgst": ${ddgst:-false} 00:43:10.531 }, 00:43:10.531 "method": "bdev_nvme_attach_controller" 00:43:10.531 } 00:43:10.531 EOF 00:43:10.531 )") 00:43:10.531 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:10.532 11:26:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan
00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq .
00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=,
00:43:10.532 11:26:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:43:10.532 "params": {
00:43:10.532 "name": "Nvme0",
00:43:10.532 "trtype": "tcp",
00:43:10.533 "traddr": "10.0.0.2",
00:43:10.533 "adrfam": "ipv4",
00:43:10.533 "trsvcid": "4420",
00:43:10.533 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:43:10.533 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:43:10.533 "hdgst": false,
00:43:10.533 "ddgst": false
00:43:10.533 },
00:43:10.533 "method": "bdev_nvme_attach_controller"
00:43:10.533 }'
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:43:10.533 11:26:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:43:10.796 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:43:10.796 fio-3.35
00:43:10.796 Starting 1 thread
00:43:23.001
00:43:23.001 filename0: (groupid=0, jobs=1): err= 0: pid=109120: Thu Dec 5 11:26:45 2024
00:43:23.001 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.8MiB/10007msec)
00:43:23.001 slat (nsec): min=5997, max=60572, avg=7342.59, stdev=3904.51
00:43:23.001 clat (usec): min=336, max=43694, avg=7479.41, stdev=15340.42
00:43:23.001 lat (usec): min=342, max=43702, avg=7486.75, stdev=15340.39
00:43:23.001 clat percentiles (usec):
00:43:23.001 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 388],
00:43:23.001 | 30.00th=[ 404], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 478],
00:43:23.001 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[40633], 95.00th=[41157],
00:43:23.001 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[43779],
00:43:23.001 | 99.99th=[43779]
00:43:23.001 bw ( KiB/s): min= 832, max= 6112, per=100.00%, avg=2201.26, stdev=1404.42, samples=19
00:43:23.001 iops : min= 208, max= 1528, avg=550.32, stdev=351.11, samples=19
00:43:23.001 lat (usec) : 500=64.39%, 750=17.99%
00:43:23.001 lat (msec) : 2=0.22%, 50=17.39%
00:43:23.001 cpu : usr=83.43%, sys=15.88%, ctx=63, majf=0, minf=0
00:43:23.001 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:43:23.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:23.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:43:23.001 issued rwts: total=5336,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:43:23.001 latency : target=0, window=0, percentile=100.00%, depth=4
00:43:23.001
00:43:23.001 Run status group 0 (all jobs):
00:43:23.001 READ: bw=2133KiB/s (2184kB/s), 2133KiB/s-2133KiB/s (2184kB/s-2184kB/s), io=20.8MiB (21.9MB), run=10007-10007msec
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:23.001
00:43:23.001 real 0m11.101s
00:43:23.001 user 0m9.033s
00:43:23.001 sys 0m1.906s
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:43:23.001 ************************************
00:43:23.001 END TEST fio_dif_1_default
00:43:23.001 ************************************
00:43:23.001 11:26:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:43:23.001 11:26:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:43:23.001 11:26:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:43:23.001 11:26:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:43:23.001 ************************************
00:43:23.001 START TEST fio_dif_1_multi_subsystems
00:43:23.001 ************************************
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:43:23.001
11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.001 bdev_null0 00:43:23.001 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.002 [2024-12-05 11:26:46.183785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.002 bdev_null1 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
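Per test subsystem, the rpc_cmd calls traced here come down to five RPCs. Assuming rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock (as in SPDK's test harness), the standalone equivalent for subsystem 0 is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip          # done once per run
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB null bdev, 512 B blocks, DIF type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420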
00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:23.002 { 00:43:23.002 "params": { 00:43:23.002 "name": "Nvme$subsystem", 00:43:23.002 "trtype": "$TEST_TRANSPORT", 00:43:23.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:23.002 "adrfam": "ipv4", 00:43:23.002 "trsvcid": "$NVMF_PORT", 00:43:23.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:23.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:23.002 "hdgst": ${hdgst:-false}, 00:43:23.002 "ddgst": ${ddgst:-false} 00:43:23.002 }, 00:43:23.002 "method": "bdev_nvme_attach_controller" 00:43:23.002 } 00:43:23.002 EOF 00:43:23.002 )") 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:23.002 { 00:43:23.002 "params": { 00:43:23.002 "name": "Nvme$subsystem", 00:43:23.002 "trtype": "$TEST_TRANSPORT", 00:43:23.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:23.002 "adrfam": "ipv4", 00:43:23.002 "trsvcid": "$NVMF_PORT", 00:43:23.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:23.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:23.002 "hdgst": ${hdgst:-false}, 00:43:23.002 "ddgst": ${ddgst:-false} 00:43:23.002 }, 00:43:23.002 "method": "bdev_nvme_attach_controller" 00:43:23.002 } 00:43:23.002 EOF 00:43:23.002 )") 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
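The generator assembled above emits the JSON printed just below (one bdev_nvme_attach_controller entry per subsystem), which fio's external spdk_bdev engine consumes. A standalone equivalent of the fio_bdev wrapper, with hypothetical file names standing in for the harness's /dev/fd redirections:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf attach.json job.fio    # attach.json and job.fio are placeholders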
00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:23.002 "params": { 00:43:23.002 "name": "Nvme0", 00:43:23.002 "trtype": "tcp", 00:43:23.002 "traddr": "10.0.0.2", 00:43:23.002 "adrfam": "ipv4", 00:43:23.002 "trsvcid": "4420", 00:43:23.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:23.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:23.002 "hdgst": false, 00:43:23.002 "ddgst": false 00:43:23.002 }, 00:43:23.002 "method": "bdev_nvme_attach_controller" 00:43:23.002 },{ 00:43:23.002 "params": { 00:43:23.002 "name": "Nvme1", 00:43:23.002 "trtype": "tcp", 00:43:23.002 "traddr": "10.0.0.2", 00:43:23.002 "adrfam": "ipv4", 00:43:23.002 "trsvcid": "4420", 00:43:23.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:23.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:23.002 "hdgst": false, 00:43:23.002 "ddgst": false 00:43:23.002 }, 00:43:23.002 "method": "bdev_nvme_attach_controller" 00:43:23.002 }' 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:23.002 11:26:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:23.002 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:23.002 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:43:23.002 fio-3.35 00:43:23.002 Starting 2 threads 00:43:32.979 00:43:32.979 filename0: (groupid=0, jobs=1): err= 0: pid=109280: Thu Dec 5 11:26:57 2024 00:43:32.979 read: IOPS=205, BW=824KiB/s (843kB/s)(8256KiB/10025msec) 00:43:32.979 slat (nsec): min=4522, max=41455, avg=8360.16, stdev=4307.46 00:43:32.979 clat (usec): min=368, max=41445, avg=19402.65, stdev=20176.30 00:43:32.979 lat (usec): min=374, max=41456, avg=19411.01, stdev=20176.25 00:43:32.979 clat percentiles (usec): 00:43:32.979 | 1.00th=[ 379], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 404], 00:43:32.979 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 693], 60.00th=[40633], 00:43:32.979 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:32.979 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:43:32.979 | 99.99th=[41681] 00:43:32.979 bw ( KiB/s): min= 640, max= 1120, per=51.32%, avg=824.00, stdev=146.41, samples=20 00:43:32.979 iops : 
min= 160, max= 280, avg=206.00, stdev=36.60, samples=20 00:43:32.979 lat (usec) : 500=47.87%, 750=4.84%, 1000=0.19% 00:43:32.979 lat (msec) : 4=0.19%, 50=46.90% 00:43:32.979 cpu : usr=91.50%, sys=7.83%, ctx=44, majf=0, minf=0 00:43:32.979 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.979 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.979 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:32.979 filename1: (groupid=0, jobs=1): err= 0: pid=109281: Thu Dec 5 11:26:57 2024 00:43:32.979 read: IOPS=195, BW=783KiB/s (802kB/s)(7840KiB/10013msec) 00:43:32.979 slat (nsec): min=5957, max=33382, avg=8329.57, stdev=4312.94 00:43:32.979 clat (usec): min=359, max=41658, avg=20408.78, stdev=20212.41 00:43:32.979 lat (usec): min=365, max=41683, avg=20417.11, stdev=20212.17 00:43:32.979 clat percentiles (usec): 00:43:32.979 | 1.00th=[ 379], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 408], 00:43:32.979 | 30.00th=[ 424], 40.00th=[ 453], 50.00th=[ 742], 60.00th=[40633], 00:43:32.979 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:32.979 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:32.979 | 99.99th=[41681] 00:43:32.979 bw ( KiB/s): min= 608, max= 993, per=48.70%, avg=782.45, stdev=111.67, samples=20 00:43:32.979 iops : min= 152, max= 248, avg=195.60, stdev=27.89, samples=20 00:43:32.979 lat (usec) : 500=45.00%, 750=5.05%, 1000=0.36% 00:43:32.979 lat (msec) : 4=0.20%, 50=49.39% 00:43:32.979 cpu : usr=92.11%, sys=7.45%, ctx=16, majf=0, minf=9 00:43:32.979 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.979 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.979 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:32.979 00:43:32.979 Run status group 0 (all jobs): 00:43:32.979 READ: bw=1606KiB/s (1644kB/s), 783KiB/s-824KiB/s (802kB/s-843kB/s), io=15.7MiB (16.5MB), run=10013-10025msec 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.979 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.980 11:26:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 00:43:32.980 real 0m11.235s 00:43:32.980 user 0m19.225s 00:43:32.980 sys 0m1.839s 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 ************************************ 00:43:32.980 END TEST fio_dif_1_multi_subsystems 00:43:32.980 ************************************ 00:43:32.980 11:26:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:32.980 11:26:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:32.980 11:26:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 ************************************ 00:43:32.980 START TEST fio_dif_rand_params 00:43:32.980 ************************************ 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 bdev_null0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:32.980 [2024-12-05 11:26:57.493413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:32.980 { 00:43:32.980 "params": { 00:43:32.980 "name": "Nvme$subsystem", 00:43:32.980 "trtype": "$TEST_TRANSPORT", 00:43:32.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:32.980 "adrfam": "ipv4", 00:43:32.980 "trsvcid": "$NVMF_PORT", 00:43:32.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:32.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:32.980 "hdgst": ${hdgst:-false}, 00:43:32.980 "ddgst": ${ddgst:-false} 00:43:32.980 }, 00:43:32.980 "method": "bdev_nvme_attach_controller" 00:43:32.980 } 00:43:32.980 EOF 00:43:32.980 )") 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:32.980 "params": { 00:43:32.980 "name": "Nvme0", 00:43:32.980 "trtype": "tcp", 00:43:32.980 "traddr": "10.0.0.2", 00:43:32.980 "adrfam": "ipv4", 00:43:32.980 "trsvcid": "4420", 00:43:32.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:32.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:32.980 "hdgst": false, 00:43:32.980 "ddgst": false 00:43:32.980 }, 00:43:32.980 "method": "bdev_nvme_attach_controller" 00:43:32.980 }' 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:32.980 11:26:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:33.238 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:33.238 ... 
00:43:33.238 fio-3.35 00:43:33.238 Starting 3 threads 00:43:39.814 00:43:39.814 filename0: (groupid=0, jobs=1): err= 0: pid=109437: Thu Dec 5 11:27:03 2024 00:43:39.814 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(168MiB/5004msec) 00:43:39.814 slat (nsec): min=6116, max=40775, avg=11350.07, stdev=4621.60 00:43:39.814 clat (usec): min=4689, max=53220, avg=11181.66, stdev=5289.37 00:43:39.814 lat (usec): min=4695, max=53241, avg=11193.01, stdev=5289.59 00:43:39.814 clat percentiles (usec): 00:43:39.814 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 9896], 00:43:39.814 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:43:39.814 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12649], 00:43:39.814 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:43:39.814 | 99.99th=[53216] 00:43:39.814 bw ( KiB/s): min=29696, max=43776, per=33.53%, avg=34531.56, stdev=4599.89, samples=9 00:43:39.814 iops : min= 232, max= 342, avg=269.78, stdev=35.94, samples=9 00:43:39.814 lat (msec) : 10=21.72%, 20=76.72%, 50=0.22%, 100=1.34% 00:43:39.814 cpu : usr=89.07%, sys=9.51%, ctx=13, majf=0, minf=0 00:43:39.814 IO depths : 1=8.1%, 2=91.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.814 issued rwts: total=1340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.814 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:39.814 filename0: (groupid=0, jobs=1): err= 0: pid=109438: Thu Dec 5 11:27:03 2024 00:43:39.814 read: IOPS=239, BW=30.0MiB/s (31.4MB/s)(150MiB/5004msec) 00:43:39.814 slat (usec): min=6, max=292, avg=12.08, stdev= 8.86 00:43:39.814 clat (usec): min=3742, max=16892, avg=12502.20, stdev=2322.78 00:43:39.814 lat (usec): min=3749, max=16906, avg=12514.29, stdev=2323.68 00:43:39.814 clat percentiles (usec): 00:43:39.814 | 1.00th=[ 6325], 5.00th=[ 7898], 10.00th=[ 8291], 20.00th=[11076], 00:43:39.814 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[13566], 00:43:39.814 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:43:39.814 | 99.00th=[16057], 99.50th=[16319], 99.90th=[16712], 99.95th=[16909], 00:43:39.814 | 99.99th=[16909] 00:43:39.814 bw ( KiB/s): min=26112, max=37120, per=29.80%, avg=30691.56, stdev=3704.14, samples=9 00:43:39.814 iops : min= 204, max= 290, avg=239.78, stdev=28.94, samples=9 00:43:39.814 lat (msec) : 4=0.33%, 10=18.27%, 20=81.40% 00:43:39.814 cpu : usr=89.45%, sys=9.05%, ctx=67, majf=0, minf=0 00:43:39.814 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.814 issued rwts: total=1199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.814 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:39.814 filename0: (groupid=0, jobs=1): err= 0: pid=109439: Thu Dec 5 11:27:03 2024 00:43:39.814 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(186MiB/5003msec) 00:43:39.814 slat (nsec): min=5319, max=41286, avg=11767.83, stdev=3925.93 00:43:39.814 clat (usec): min=4580, max=52067, avg=10077.70, stdev=5573.09 00:43:39.814 lat (usec): min=4587, max=52091, avg=10089.47, stdev=5573.53 00:43:39.814 clat percentiles (usec): 00:43:39.814 | 1.00th=[ 5932], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 8717], 00:43:39.814 | 30.00th=[ 
9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:43:39.814 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:43:39.814 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:43:39.814 | 99.99th=[52167] 00:43:39.814 bw ( KiB/s): min=23040, max=46080, per=36.54%, avg=37632.00, stdev=7275.76, samples=9 00:43:39.814 iops : min= 180, max= 360, avg=294.00, stdev=56.84, samples=9 00:43:39.814 lat (msec) : 10=72.76%, 20=25.42%, 50=0.74%, 100=1.08% 00:43:39.814 cpu : usr=88.80%, sys=9.76%, ctx=46, majf=0, minf=0 00:43:39.814 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:39.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:39.814 issued rwts: total=1487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:39.814 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:39.814 00:43:39.814 Run status group 0 (all jobs): 00:43:39.814 READ: bw=101MiB/s (105MB/s), 30.0MiB/s-37.2MiB/s (31.4MB/s-39.0MB/s), io=503MiB (528MB), run=5003-5004msec 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 bdev_null0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 [2024-12-05 11:27:03.577254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 bdev_null1 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.814 bdev_null2 00:43:39.814 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:39.815 { 00:43:39.815 "params": { 00:43:39.815 "name": "Nvme$subsystem", 00:43:39.815 "trtype": "$TEST_TRANSPORT", 00:43:39.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:39.815 "adrfam": "ipv4", 00:43:39.815 "trsvcid": "$NVMF_PORT", 00:43:39.815 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:39.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:39.815 "hdgst": ${hdgst:-false}, 00:43:39.815 "ddgst": ${ddgst:-false} 00:43:39.815 }, 00:43:39.815 "method": "bdev_nvme_attach_controller" 00:43:39.815 } 00:43:39.815 EOF 00:43:39.815 )") 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:39.815 { 00:43:39.815 "params": { 00:43:39.815 "name": "Nvme$subsystem", 00:43:39.815 "trtype": "$TEST_TRANSPORT", 00:43:39.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:39.815 "adrfam": "ipv4", 00:43:39.815 "trsvcid": "$NVMF_PORT", 00:43:39.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:39.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:39.815 "hdgst": ${hdgst:-false}, 00:43:39.815 "ddgst": ${ddgst:-false} 00:43:39.815 }, 00:43:39.815 "method": "bdev_nvme_attach_controller" 00:43:39.815 } 00:43:39.815 EOF 00:43:39.815 )") 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@394 -- # cat 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:39.815 { 00:43:39.815 "params": { 00:43:39.815 "name": "Nvme$subsystem", 00:43:39.815 "trtype": "$TEST_TRANSPORT", 00:43:39.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:39.815 "adrfam": "ipv4", 00:43:39.815 "trsvcid": "$NVMF_PORT", 00:43:39.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:39.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:39.815 "hdgst": ${hdgst:-false}, 00:43:39.815 "ddgst": ${ddgst:-false} 00:43:39.815 }, 00:43:39.815 "method": "bdev_nvme_attach_controller" 00:43:39.815 } 00:43:39.815 EOF 00:43:39.815 )") 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:39.815 "params": { 00:43:39.815 "name": "Nvme0", 00:43:39.815 "trtype": "tcp", 00:43:39.815 "traddr": "10.0.0.2", 00:43:39.815 "adrfam": "ipv4", 00:43:39.815 "trsvcid": "4420", 00:43:39.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:39.815 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:39.815 "hdgst": false, 00:43:39.815 "ddgst": false 00:43:39.815 }, 00:43:39.815 "method": "bdev_nvme_attach_controller" 00:43:39.815 },{ 00:43:39.815 "params": { 00:43:39.815 "name": "Nvme1", 00:43:39.815 "trtype": "tcp", 00:43:39.815 "traddr": "10.0.0.2", 00:43:39.815 "adrfam": "ipv4", 00:43:39.815 "trsvcid": "4420", 00:43:39.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:39.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:39.815 "hdgst": false, 00:43:39.815 "ddgst": false 00:43:39.815 }, 00:43:39.815 "method": "bdev_nvme_attach_controller" 00:43:39.815 },{ 00:43:39.815 "params": { 00:43:39.815 "name": "Nvme2", 00:43:39.815 "trtype": "tcp", 00:43:39.815 "traddr": "10.0.0.2", 00:43:39.815 "adrfam": "ipv4", 00:43:39.815 "trsvcid": "4420", 00:43:39.815 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:39.815 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:39.815 "hdgst": false, 00:43:39.815 "ddgst": false 00:43:39.815 }, 00:43:39.815 "method": "bdev_nvme_attach_controller" 00:43:39.815 }' 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:39.815 11:27:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:39.815 11:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:39.815 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:39.815 ... 00:43:39.815 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:39.815 ... 00:43:39.815 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:39.815 ... 00:43:39.815 fio-3.35 00:43:39.815 Starting 24 threads 00:43:52.051 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109539: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=255, BW=1023KiB/s (1048kB/s)(10.0MiB/10034msec) 00:43:52.051 slat (usec): min=6, max=8021, avg=19.37, stdev=251.77 00:43:52.051 clat (msec): min=20, max=132, avg=62.39, stdev=20.77 00:43:52.051 lat (msec): min=20, max=132, avg=62.41, stdev=20.77 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 45], 00:43:52.051 | 30.00th=[ 48], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 67], 00:43:52.051 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 104], 00:43:52.051 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 133], 99.95th=[ 133], 00:43:52.051 | 99.99th=[ 133] 00:43:52.051 bw ( KiB/s): min= 688, max= 1760, per=4.37%, avg=1020.40, stdev=242.42, samples=20 00:43:52.051 iops : min= 172, max= 440, avg=255.10, stdev=60.61, samples=20 00:43:52.051 lat (msec) : 50=32.10%, 100=62.10%, 250=5.80% 00:43:52.051 cpu : usr=35.61%, sys=1.41%, ctx=1023, majf=0, minf=9 00:43:52.051 IO depths : 1=1.1%, 2=2.4%, 4=9.4%, 8=74.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:43:52.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 issued rwts: total=2567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109540: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=290, BW=1163KiB/s (1191kB/s)(11.4MiB/10042msec) 00:43:52.051 slat (usec): min=6, max=7041, avg=16.40, stdev=159.96 00:43:52.051 clat (msec): min=3, max=173, avg=54.88, stdev=25.30 00:43:52.051 lat (msec): min=3, max=173, avg=54.90, stdev=25.30 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 39], 00:43:52.051 | 30.00th=[ 43], 40.00th=[ 47], 50.00th=[ 51], 60.00th=[ 57], 00:43:52.051 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 87], 95.00th=[ 103], 00:43:52.051 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:43:52.051 | 99.99th=[ 174] 00:43:52.051 bw ( KiB/s): min= 520, max= 3072, per=4.96%, avg=1160.90, stdev=505.95, samples=20 00:43:52.051 iops : min= 130, max= 768, avg=290.20, stdev=126.47, samples=20 00:43:52.051 lat (msec) : 4=0.92%, 10=2.91%, 20=3.84%, 50=41.61%, 100=44.42% 00:43:52.051 lat (msec) : 250=6.30% 00:43:52.051 cpu : usr=45.78%, sys=2.01%, ctx=1104, majf=0, minf=0 00:43:52.051 IO depths : 1=2.0%, 2=4.5%, 4=13.5%, 8=68.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:43:52.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 complete : 0=0.0%, 4=91.1%, 8=4.1%, 16=4.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:43:52.051 issued rwts: total=2920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109541: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=279, BW=1117KiB/s (1144kB/s)(11.0MiB/10049msec) 00:43:52.051 slat (usec): min=3, max=7022, avg=16.82, stdev=175.98 00:43:52.051 clat (msec): min=3, max=143, avg=57.12, stdev=21.53 00:43:52.051 lat (msec): min=3, max=143, avg=57.13, stdev=21.53 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 33], 20.00th=[ 43], 00:43:52.051 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 62], 00:43:52.051 | 70.00th=[ 67], 80.00th=[ 74], 90.00th=[ 83], 95.00th=[ 93], 00:43:52.051 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 130], 99.95th=[ 130], 00:43:52.051 | 99.99th=[ 144] 00:43:52.051 bw ( KiB/s): min= 736, max= 2616, per=4.78%, avg=1116.40, stdev=385.34, samples=20 00:43:52.051 iops : min= 184, max= 654, avg=279.10, stdev=96.34, samples=20 00:43:52.051 lat (msec) : 4=0.36%, 10=2.49%, 20=2.78%, 50=33.10%, 100=57.29% 00:43:52.051 lat (msec) : 250=3.99% 00:43:52.051 cpu : usr=36.14%, sys=1.64%, ctx=1620, majf=0, minf=9 00:43:52.051 IO depths : 1=0.9%, 2=2.1%, 4=9.2%, 8=75.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:43:52.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 issued rwts: total=2807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109542: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=222, BW=890KiB/s (912kB/s)(8916KiB/10014msec) 00:43:52.051 slat (usec): min=4, max=8041, avg=29.28, stdev=369.88 00:43:52.051 clat (msec): min=21, max=186, avg=71.62, stdev=24.09 00:43:52.051 lat (msec): min=21, max=186, avg=71.65, stdev=24.09 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 45], 20.00th=[ 52], 00:43:52.051 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:43:52.051 | 70.00th=[ 84], 80.00th=[ 89], 90.00th=[ 102], 95.00th=[ 114], 00:43:52.051 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 186], 99.95th=[ 186], 00:43:52.051 | 99.99th=[ 186] 00:43:52.051 bw ( KiB/s): min= 640, max= 1376, per=3.77%, avg=880.84, stdev=178.57, samples=19 00:43:52.051 iops : min= 160, max= 344, avg=220.21, stdev=44.64, samples=19 00:43:52.051 lat (msec) : 50=18.48%, 100=70.48%, 250=11.04% 00:43:52.051 cpu : usr=32.42%, sys=1.23%, ctx=904, majf=0, minf=9 00:43:52.051 IO depths : 1=1.7%, 2=3.6%, 4=11.3%, 8=72.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:43:52.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109543: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=233, BW=935KiB/s (957kB/s)(9356KiB/10007msec) 00:43:52.051 slat (usec): min=4, max=8001, avg=18.60, stdev=219.84 00:43:52.051 clat (msec): min=6, max=165, avg=68.34, stdev=25.38 00:43:52.051 lat (msec): min=6, max=165, avg=68.35, stdev=25.37 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:43:52.051 
| 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:43:52.051 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 115], 00:43:52.051 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 167], 00:43:52.051 | 99.99th=[ 167] 00:43:52.051 bw ( KiB/s): min= 512, max= 1280, per=3.96%, avg=925.89, stdev=187.20, samples=19 00:43:52.051 iops : min= 128, max= 320, avg=231.47, stdev=46.80, samples=19 00:43:52.051 lat (msec) : 10=0.26%, 20=0.43%, 50=24.58%, 100=63.40%, 250=11.33% 00:43:52.051 cpu : usr=31.91%, sys=1.32%, ctx=861, majf=0, minf=9 00:43:52.051 IO depths : 1=1.6%, 2=3.2%, 4=10.7%, 8=72.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:43:52.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 issued rwts: total=2339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109544: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=261, BW=1048KiB/s (1073kB/s)(10.3MiB/10026msec) 00:43:52.051 slat (usec): min=6, max=4038, avg=13.48, stdev=78.99 00:43:52.051 clat (msec): min=15, max=142, avg=60.91, stdev=21.24 00:43:52.051 lat (msec): min=15, max=142, avg=60.93, stdev=21.24 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 43], 00:43:52.051 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:43:52.051 | 70.00th=[ 69], 80.00th=[ 79], 90.00th=[ 85], 95.00th=[ 105], 00:43:52.051 | 99.00th=[ 115], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 144], 00:43:52.051 | 99.99th=[ 144] 00:43:52.051 bw ( KiB/s): min= 728, max= 1715, per=4.49%, avg=1048.15, stdev=237.73, samples=20 00:43:52.051 iops : min= 182, max= 428, avg=262.00, stdev=59.32, samples=20 00:43:52.051 lat (msec) : 20=1.83%, 50=32.60%, 100=59.18%, 250=6.40% 00:43:52.051 cpu : usr=37.09%, sys=1.55%, ctx=1233, majf=0, minf=9 00:43:52.051 IO depths : 1=1.3%, 2=2.8%, 4=10.5%, 8=73.3%, 16=12.1%, 32=0.0%, >=64=0.0% 00:43:52.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 issued rwts: total=2626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109545: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=238, BW=956KiB/s (979kB/s)(9568KiB/10012msec) 00:43:52.051 slat (usec): min=3, max=8044, avg=21.71, stdev=232.08 00:43:52.051 clat (msec): min=15, max=160, avg=66.82, stdev=22.26 00:43:52.051 lat (msec): min=15, max=160, avg=66.84, stdev=22.27 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 48], 00:43:52.051 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:43:52.051 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 95], 95.00th=[ 108], 00:43:52.051 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 161], 00:43:52.051 | 99.99th=[ 161] 00:43:52.051 bw ( KiB/s): min= 688, max= 1408, per=4.10%, avg=960.00, stdev=185.71, samples=19 00:43:52.051 iops : min= 172, max= 352, avg=240.00, stdev=46.43, samples=19 00:43:52.051 lat (msec) : 20=1.55%, 50=21.32%, 100=68.90%, 250=8.24% 00:43:52.051 cpu : usr=38.87%, sys=1.48%, ctx=1382, majf=0, minf=9 00:43:52.051 IO depths : 1=2.3%, 2=4.8%, 4=13.8%, 8=68.3%, 16=10.8%, 32=0.0%, >=64=0.0% 00:43:52.051 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.051 issued rwts: total=2392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.051 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.051 filename0: (groupid=0, jobs=1): err= 0: pid=109546: Thu Dec 5 11:27:14 2024 00:43:52.051 read: IOPS=294, BW=1178KiB/s (1206kB/s)(11.6MiB/10056msec) 00:43:52.051 slat (usec): min=3, max=7004, avg=16.95, stdev=180.83 00:43:52.051 clat (msec): min=3, max=125, avg=54.13, stdev=21.38 00:43:52.051 lat (msec): min=3, max=125, avg=54.15, stdev=21.38 00:43:52.051 clat percentiles (msec): 00:43:52.051 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 21], 20.00th=[ 41], 00:43:52.051 | 30.00th=[ 45], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 61], 00:43:52.051 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 86], 00:43:52.051 | 99.00th=[ 106], 99.50th=[ 113], 99.90th=[ 126], 99.95th=[ 126], 00:43:52.051 | 99.99th=[ 126] 00:43:52.052 bw ( KiB/s): min= 784, max= 3176, per=5.04%, avg=1177.80, stdev=487.83, samples=20 00:43:52.052 iops : min= 196, max= 794, avg=294.40, stdev=121.96, samples=20 00:43:52.052 lat (msec) : 4=1.11%, 10=2.67%, 20=5.87%, 50=33.52%, 100=55.17% 00:43:52.052 lat (msec) : 250=1.65% 00:43:52.052 cpu : usr=42.97%, sys=1.71%, ctx=1218, majf=0, minf=0 00:43:52.052 IO depths : 1=1.2%, 2=2.5%, 4=9.5%, 8=74.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109547: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=220, BW=882KiB/s (903kB/s)(8816KiB/10001msec) 00:43:52.052 slat (usec): min=4, max=11022, avg=26.74, stdev=353.35 00:43:52.052 clat (usec): min=1728, max=189483, avg=72420.35, stdev=27641.48 00:43:52.052 lat (usec): min=1735, max=189491, avg=72447.09, stdev=27650.37 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 46], 20.00th=[ 57], 00:43:52.052 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 75], 00:43:52.052 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 123], 00:43:52.052 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 180], 99.95th=[ 190], 00:43:52.052 | 99.99th=[ 190] 00:43:52.052 bw ( KiB/s): min= 512, max= 1152, per=3.62%, avg=846.95, stdev=146.32, samples=19 00:43:52.052 iops : min= 128, max= 288, avg=211.68, stdev=36.56, samples=19 00:43:52.052 lat (msec) : 2=0.73%, 4=2.18%, 10=1.41%, 20=0.77%, 50=9.35% 00:43:52.052 lat (msec) : 100=71.37%, 250=14.20% 00:43:52.052 cpu : usr=33.12%, sys=1.46%, ctx=1142, majf=0, minf=9 00:43:52.052 IO depths : 1=2.9%, 2=6.7%, 4=17.6%, 8=62.7%, 16=10.1%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109548: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=217, BW=869KiB/s (890kB/s)(8696KiB/10007msec) 00:43:52.052 slat (usec): min=4, max=6948, avg=18.62, stdev=188.65 00:43:52.052 clat (msec): min=9, 
max=167, avg=73.49, stdev=23.79 00:43:52.052 lat (msec): min=9, max=167, avg=73.51, stdev=23.79 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 56], 00:43:52.052 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 73], 00:43:52.052 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 117], 00:43:52.052 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 169], 00:43:52.052 | 99.99th=[ 169] 00:43:52.052 bw ( KiB/s): min= 512, max= 1280, per=3.71%, avg=866.11, stdev=173.00, samples=19 00:43:52.052 iops : min= 128, max= 320, avg=216.53, stdev=43.25, samples=19 00:43:52.052 lat (msec) : 10=0.74%, 50=12.33%, 100=71.76%, 250=15.18% 00:43:52.052 cpu : usr=41.20%, sys=1.67%, ctx=1478, majf=0, minf=9 00:43:52.052 IO depths : 1=2.1%, 2=4.7%, 4=14.2%, 8=67.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109549: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10030msec) 00:43:52.052 slat (usec): min=5, max=8038, avg=14.67, stdev=158.13 00:43:52.052 clat (msec): min=10, max=116, avg=62.01, stdev=18.95 00:43:52.052 lat (msec): min=10, max=116, avg=62.02, stdev=18.95 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 47], 00:43:52.052 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 66], 00:43:52.052 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 95], 00:43:52.052 | 99.00th=[ 110], 99.50th=[ 111], 99.90th=[ 117], 99.95th=[ 117], 00:43:52.052 | 99.99th=[ 117] 00:43:52.052 bw ( KiB/s): min= 688, max= 1760, per=4.40%, avg=1029.60, stdev=211.31, samples=20 00:43:52.052 iops : min= 172, max= 440, avg=257.40, stdev=52.83, samples=20 00:43:52.052 lat (msec) : 20=1.24%, 50=27.21%, 100=67.67%, 250=3.88% 00:43:52.052 cpu : usr=31.96%, sys=1.34%, ctx=873, majf=0, minf=9 00:43:52.052 IO depths : 1=0.9%, 2=2.2%, 4=9.1%, 8=75.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109550: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=230, BW=922KiB/s (944kB/s)(9232KiB/10017msec) 00:43:52.052 slat (usec): min=4, max=8050, avg=25.03, stdev=307.23 00:43:52.052 clat (msec): min=23, max=188, avg=69.24, stdev=21.51 00:43:52.052 lat (msec): min=23, max=188, avg=69.27, stdev=21.51 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 51], 00:43:52.052 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:43:52.052 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 107], 00:43:52.052 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 188], 99.95th=[ 188], 00:43:52.052 | 99.99th=[ 188] 00:43:52.052 bw ( KiB/s): min= 640, max= 1280, per=3.94%, avg=920.40, stdev=162.14, samples=20 00:43:52.052 iops : min= 160, max= 320, avg=230.10, stdev=40.54, samples=20 00:43:52.052 lat (msec) : 50=19.28%, 
100=72.10%, 250=8.62% 00:43:52.052 cpu : usr=33.98%, sys=1.51%, ctx=968, majf=0, minf=9 00:43:52.052 IO depths : 1=1.9%, 2=4.2%, 4=13.4%, 8=69.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109551: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=235, BW=944KiB/s (966kB/s)(9464KiB/10028msec) 00:43:52.052 slat (usec): min=5, max=7997, avg=15.41, stdev=164.29 00:43:52.052 clat (msec): min=12, max=172, avg=67.71, stdev=22.60 00:43:52.052 lat (msec): min=12, max=172, avg=67.72, stdev=22.60 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 48], 00:43:52.052 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:43:52.052 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:43:52.052 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 174], 99.95th=[ 174], 00:43:52.052 | 99.99th=[ 174] 00:43:52.052 bw ( KiB/s): min= 600, max= 1408, per=4.02%, avg=939.65, stdev=183.45, samples=20 00:43:52.052 iops : min= 150, max= 352, avg=234.90, stdev=45.86, samples=20 00:43:52.052 lat (msec) : 20=0.68%, 50=20.96%, 100=70.03%, 250=8.33% 00:43:52.052 cpu : usr=33.42%, sys=1.35%, ctx=948, majf=0, minf=9 00:43:52.052 IO depths : 1=1.0%, 2=2.3%, 4=10.7%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=89.7%, 8=5.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109552: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=218, BW=875KiB/s (896kB/s)(8756KiB/10012msec) 00:43:52.052 slat (usec): min=5, max=11021, avg=28.37, stdev=378.51 00:43:52.052 clat (msec): min=12, max=166, avg=72.98, stdev=24.87 00:43:52.052 lat (msec): min=12, max=166, avg=73.01, stdev=24.87 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 17], 5.00th=[ 35], 10.00th=[ 46], 20.00th=[ 56], 00:43:52.052 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 74], 00:43:52.052 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 118], 00:43:52.052 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 167], 99.95th=[ 167], 00:43:52.052 | 99.99th=[ 167] 00:43:52.052 bw ( KiB/s): min= 512, max= 1488, per=3.74%, avg=874.53, stdev=205.87, samples=19 00:43:52.052 iops : min= 128, max= 372, avg=218.63, stdev=51.47, samples=19 00:43:52.052 lat (msec) : 20=1.92%, 50=14.66%, 100=67.25%, 250=16.17% 00:43:52.052 cpu : usr=35.47%, sys=1.34%, ctx=999, majf=0, minf=9 00:43:52.052 IO depths : 1=2.0%, 2=5.2%, 4=16.4%, 8=65.6%, 16=10.9%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109553: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=216, BW=865KiB/s (886kB/s)(8652KiB/10005msec) 00:43:52.052 slat 
(usec): min=4, max=8032, avg=18.94, stdev=193.24 00:43:52.052 clat (msec): min=5, max=163, avg=73.84, stdev=23.81 00:43:52.052 lat (msec): min=5, max=163, avg=73.86, stdev=23.80 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 50], 20.00th=[ 57], 00:43:52.052 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 77], 00:43:52.052 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:43:52.052 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:43:52.052 | 99.99th=[ 165] 00:43:52.052 bw ( KiB/s): min= 512, max= 1152, per=3.65%, avg=852.84, stdev=167.34, samples=19 00:43:52.052 iops : min= 128, max= 288, avg=213.21, stdev=41.83, samples=19 00:43:52.052 lat (msec) : 10=1.06%, 20=0.74%, 50=9.52%, 100=72.31%, 250=16.37% 00:43:52.052 cpu : usr=44.21%, sys=1.32%, ctx=1282, majf=0, minf=9 00:43:52.052 IO depths : 1=3.4%, 2=7.5%, 4=18.4%, 8=61.1%, 16=9.6%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename1: (groupid=0, jobs=1): err= 0: pid=109554: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=220, BW=882KiB/s (903kB/s)(8820KiB/10003msec) 00:43:52.052 slat (usec): min=4, max=8028, avg=25.30, stdev=298.57 00:43:52.052 clat (msec): min=2, max=185, avg=72.38, stdev=26.98 00:43:52.052 lat (msec): min=2, max=185, avg=72.41, stdev=26.98 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 3], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 57], 00:43:52.052 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:43:52.052 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 113], 00:43:52.052 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 186], 99.95th=[ 186], 00:43:52.052 | 99.99th=[ 186] 00:43:52.052 bw ( KiB/s): min= 512, max= 1077, per=3.65%, avg=853.63, stdev=143.23, samples=19 00:43:52.052 iops : min= 128, max= 269, avg=213.37, stdev=35.78, samples=19 00:43:52.052 lat (msec) : 4=1.68%, 10=2.18%, 20=0.50%, 50=8.25%, 100=72.52% 00:43:52.052 lat (msec) : 250=14.88% 00:43:52.052 cpu : usr=41.40%, sys=1.42%, ctx=1159, majf=0, minf=9 00:43:52.052 IO depths : 1=3.0%, 2=6.6%, 4=17.1%, 8=63.7%, 16=9.7%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename2: (groupid=0, jobs=1): err= 0: pid=109555: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=243, BW=975KiB/s (999kB/s)(9764KiB/10010msec) 00:43:52.052 slat (usec): min=4, max=6035, avg=16.92, stdev=156.67 00:43:52.052 clat (msec): min=19, max=135, avg=65.48, stdev=21.17 00:43:52.052 lat (msec): min=19, max=136, avg=65.50, stdev=21.17 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 47], 00:43:52.052 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:43:52.052 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 108], 00:43:52.052 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:43:52.052 | 99.99th=[ 136] 00:43:52.052 bw ( KiB/s): min= 768, max= 1416, per=4.15%, avg=970.00, 
stdev=195.97, samples=20 00:43:52.052 iops : min= 192, max= 354, avg=242.50, stdev=48.99, samples=20 00:43:52.052 lat (msec) : 20=0.41%, 50=26.87%, 100=65.30%, 250=7.41% 00:43:52.052 cpu : usr=39.04%, sys=1.48%, ctx=1076, majf=0, minf=9 00:43:52.052 IO depths : 1=1.7%, 2=3.9%, 4=12.5%, 8=70.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename2: (groupid=0, jobs=1): err= 0: pid=109556: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=222, BW=889KiB/s (910kB/s)(8892KiB/10004msec) 00:43:52.052 slat (usec): min=4, max=8035, avg=29.04, stdev=371.02 00:43:52.052 clat (msec): min=8, max=151, avg=71.78, stdev=24.40 00:43:52.052 lat (msec): min=8, max=151, avg=71.81, stdev=24.40 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 13], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 58], 00:43:52.052 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 73], 00:43:52.052 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 107], 95.00th=[ 118], 00:43:52.052 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:43:52.052 | 99.99th=[ 153] 00:43:52.052 bw ( KiB/s): min= 512, max= 1384, per=3.75%, avg=877.68, stdev=177.65, samples=19 00:43:52.052 iops : min= 128, max= 346, avg=219.42, stdev=44.41, samples=19 00:43:52.052 lat (msec) : 10=0.54%, 20=0.90%, 50=15.34%, 100=69.05%, 250=14.17% 00:43:52.052 cpu : usr=31.99%, sys=1.22%, ctx=873, majf=0, minf=9 00:43:52.052 IO depths : 1=2.3%, 2=5.4%, 4=15.2%, 8=66.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=91.4%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename2: (groupid=0, jobs=1): err= 0: pid=109557: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=215, BW=864KiB/s (885kB/s)(8652KiB/10014msec) 00:43:52.052 slat (usec): min=4, max=6806, avg=17.46, stdev=174.67 00:43:52.052 clat (msec): min=14, max=183, avg=73.87, stdev=22.94 00:43:52.052 lat (msec): min=14, max=183, avg=73.89, stdev=22.94 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:43:52.052 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 75], 00:43:52.052 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 115], 00:43:52.052 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 184], 99.95th=[ 184], 00:43:52.052 | 99.99th=[ 184] 00:43:52.052 bw ( KiB/s): min= 552, max= 1208, per=3.69%, avg=863.58, stdev=156.64, samples=19 00:43:52.052 iops : min= 138, max= 302, avg=215.89, stdev=39.16, samples=19 00:43:52.052 lat (msec) : 20=0.74%, 50=12.21%, 100=71.84%, 250=15.21% 00:43:52.052 cpu : usr=39.07%, sys=1.56%, ctx=1174, majf=0, minf=9 00:43:52.052 IO depths : 1=3.0%, 2=6.5%, 4=16.9%, 8=63.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename2: 
(groupid=0, jobs=1): err= 0: pid=109558: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=245, BW=981KiB/s (1004kB/s)(9816KiB/10007msec) 00:43:52.052 slat (usec): min=4, max=4026, avg=14.78, stdev=95.70 00:43:52.052 clat (msec): min=15, max=152, avg=65.16, stdev=23.54 00:43:52.052 lat (msec): min=15, max=152, avg=65.17, stdev=23.54 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 46], 00:43:52.052 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 68], 00:43:52.052 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:43:52.052 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 153], 99.95th=[ 153], 00:43:52.052 | 99.99th=[ 153] 00:43:52.052 bw ( KiB/s): min= 480, max= 1328, per=4.22%, avg=986.11, stdev=224.58, samples=19 00:43:52.052 iops : min= 120, max= 332, avg=246.53, stdev=56.15, samples=19 00:43:52.052 lat (msec) : 20=0.65%, 50=30.32%, 100=61.94%, 250=7.09% 00:43:52.052 cpu : usr=37.67%, sys=1.58%, ctx=1450, majf=0, minf=9 00:43:52.052 IO depths : 1=0.5%, 2=1.1%, 4=5.8%, 8=78.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=89.2%, 8=7.3%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename2: (groupid=0, jobs=1): err= 0: pid=109559: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=254, BW=1018KiB/s (1042kB/s)(9.97MiB/10031msec) 00:43:52.052 slat (usec): min=5, max=7056, avg=21.36, stdev=211.70 00:43:52.052 clat (msec): min=18, max=180, avg=62.63, stdev=21.86 00:43:52.052 lat (msec): min=18, max=180, avg=62.65, stdev=21.86 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 44], 00:43:52.052 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 67], 00:43:52.052 | 70.00th=[ 71], 80.00th=[ 78], 90.00th=[ 87], 95.00th=[ 96], 00:43:52.052 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 182], 99.95th=[ 182], 00:43:52.052 | 99.99th=[ 182] 00:43:52.052 bw ( KiB/s): min= 768, max= 1569, per=4.34%, avg=1014.45, stdev=227.42, samples=20 00:43:52.052 iops : min= 192, max= 392, avg=253.60, stdev=56.82, samples=20 00:43:52.052 lat (msec) : 20=0.43%, 50=30.16%, 100=65.69%, 250=3.72% 00:43:52.052 cpu : usr=41.21%, sys=1.34%, ctx=1150, majf=0, minf=9 00:43:52.052 IO depths : 1=1.7%, 2=3.6%, 4=11.7%, 8=71.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:43:52.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.052 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.052 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.052 filename2: (groupid=0, jobs=1): err= 0: pid=109560: Thu Dec 5 11:27:14 2024 00:43:52.052 read: IOPS=248, BW=994KiB/s (1018kB/s)(9952KiB/10008msec) 00:43:52.052 slat (usec): min=5, max=6067, avg=14.24, stdev=121.63 00:43:52.052 clat (msec): min=13, max=144, avg=64.27, stdev=20.99 00:43:52.052 lat (msec): min=13, max=144, avg=64.28, stdev=20.99 00:43:52.052 clat percentiles (msec): 00:43:52.052 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:43:52.052 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 66], 00:43:52.052 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 106], 00:43:52.052 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 146], 
00:43:52.052 | 99.99th=[ 146] 00:43:52.053 bw ( KiB/s): min= 688, max= 1280, per=4.22%, avg=986.95, stdev=178.13, samples=19 00:43:52.053 iops : min= 172, max= 320, avg=246.74, stdev=44.53, samples=19 00:43:52.053 lat (msec) : 20=0.64%, 50=25.24%, 100=67.64%, 250=6.47% 00:43:52.053 cpu : usr=37.26%, sys=1.40%, ctx=1175, majf=0, minf=9 00:43:52.053 IO depths : 1=0.4%, 2=0.9%, 4=7.1%, 8=77.9%, 16=13.7%, 32=0.0%, >=64=0.0% 00:43:52.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.053 complete : 0=0.0%, 4=89.2%, 8=6.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.053 issued rwts: total=2488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.053 filename2: (groupid=0, jobs=1): err= 0: pid=109561: Thu Dec 5 11:27:14 2024 00:43:52.053 read: IOPS=262, BW=1048KiB/s (1073kB/s)(10.3MiB/10034msec) 00:43:52.053 slat (usec): min=6, max=630, avg=11.69, stdev=14.18 00:43:52.053 clat (msec): min=12, max=149, avg=60.96, stdev=22.89 00:43:52.053 lat (msec): min=12, max=149, avg=60.97, stdev=22.89 00:43:52.053 clat percentiles (msec): 00:43:52.053 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 45], 00:43:52.053 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 60], 60.00th=[ 63], 00:43:52.053 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 94], 95.00th=[ 103], 00:43:52.053 | 99.00th=[ 128], 99.50th=[ 148], 99.90th=[ 150], 99.95th=[ 150], 00:43:52.053 | 99.99th=[ 150] 00:43:52.053 bw ( KiB/s): min= 560, max= 1920, per=4.47%, avg=1045.20, stdev=265.38, samples=20 00:43:52.053 iops : min= 140, max= 480, avg=261.30, stdev=66.35, samples=20 00:43:52.053 lat (msec) : 20=3.65%, 50=34.92%, 100=55.34%, 250=6.09% 00:43:52.053 cpu : usr=34.52%, sys=1.22%, ctx=985, majf=0, minf=9 00:43:52.053 IO depths : 1=1.2%, 2=2.6%, 4=9.5%, 8=74.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:43:52.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.053 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.053 issued rwts: total=2629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:52.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.053 filename2: (groupid=0, jobs=1): err= 0: pid=109562: Thu Dec 5 11:27:14 2024 00:43:52.053 read: IOPS=274, BW=1100KiB/s (1126kB/s)(10.8MiB/10026msec) 00:43:52.053 slat (usec): min=5, max=4028, avg=12.51, stdev=76.74 00:43:52.053 clat (msec): min=12, max=131, avg=58.08, stdev=19.63 00:43:52.053 lat (msec): min=12, max=131, avg=58.09, stdev=19.63 00:43:52.053 clat percentiles (msec): 00:43:52.053 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 41], 00:43:52.053 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 63], 00:43:52.053 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 96], 00:43:52.053 | 99.00th=[ 108], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 132], 00:43:52.053 | 99.99th=[ 132] 00:43:52.053 bw ( KiB/s): min= 720, max= 1592, per=4.69%, avg=1096.40, stdev=219.51, samples=20 00:43:52.053 iops : min= 180, max= 398, avg=274.10, stdev=54.88, samples=20 00:43:52.053 lat (msec) : 20=1.74%, 50=40.59%, 100=54.99%, 250=2.68% 00:43:52.053 cpu : usr=40.28%, sys=1.62%, ctx=1060, majf=0, minf=9 00:43:52.053 IO depths : 1=0.5%, 2=1.2%, 4=6.6%, 8=78.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:43:52.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.053 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:52.053 issued rwts: total=2757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:43:52.053 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:52.053 00:43:52.053 Run status group 0 (all jobs): 00:43:52.053 READ: bw=22.8MiB/s (23.9MB/s), 864KiB/s-1178KiB/s (885kB/s-1206kB/s), io=229MiB (241MB), run=10001-10056msec 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:52.053 11:27:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 bdev_null0 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 [2024-12-05 11:27:15.046691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:52.053 
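The create_subsystem helper traced above wires each target up in four RPC calls: create a null backing bdev carrying 16 bytes of per-block metadata with DIF type 1, create the subsystem NQN, attach the bdev as a namespace, and expose an NVMe/TCP listener. Issued by hand against a running target, the sequence for subsystem 0 reduces to the sketch below (assuming SPDK's stock scripts/rpc.py talking to the default RPC socket; the commands and arguments are the ones traced in this log):

    # 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # Subsystem NQN that any host NQN is allowed to connect to
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    # Attach the bdev as a namespace of the subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    # Accept NVMe/TCP connections on the test address
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The same four calls repeat below for subsystem 1, with only the index changing.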
11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 bdev_null1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 
00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:52.053 { 00:43:52.053 "params": { 00:43:52.053 "name": "Nvme$subsystem", 00:43:52.053 "trtype": "$TEST_TRANSPORT", 00:43:52.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:52.053 "adrfam": "ipv4", 00:43:52.053 "trsvcid": "$NVMF_PORT", 00:43:52.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:52.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:52.053 "hdgst": ${hdgst:-false}, 00:43:52.053 "ddgst": ${ddgst:-false} 00:43:52.053 }, 00:43:52.053 "method": "bdev_nvme_attach_controller" 00:43:52.053 } 00:43:52.053 EOF 00:43:52.053 )") 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:52.053 { 00:43:52.053 "params": { 00:43:52.053 "name": "Nvme$subsystem", 00:43:52.053 "trtype": "$TEST_TRANSPORT", 00:43:52.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:52.053 "adrfam": "ipv4", 00:43:52.053 "trsvcid": "$NVMF_PORT", 00:43:52.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:52.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:52.053 "hdgst": ${hdgst:-false}, 00:43:52.053 "ddgst": ${ddgst:-false} 00:43:52.053 }, 00:43:52.053 "method": "bdev_nvme_attach_controller" 00:43:52.053 } 00:43:52.053 EOF 00:43:52.053 )") 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
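The config+=("$(cat <<-EOF ...)") and IFS=, fragments traced here are gen_nvmf_target_json building one bdev_nvme_attach_controller JSON object per subsystem in a bash array, then comma-joining the array into a single bdev-subsystem config for fio. A minimal sketch of the same pattern (the outer "subsystems" wrapper is not echoed by xtrace; it is assumed here to follow SPDK's usual JSON-config layout):

    config=()
    for sub in 0 1; do
        config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$sub",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
        "hostnqn": "nqn.2016-06.io.spdk:host$sub",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
        )")
    done
    # Comma-join the fragments inside a subshell so IFS=, stays local,
    # and let jq validate and pretty-print the assembled document
    # (heredoc terminators must sit at the start of the line when run)
    jq . <<JSON
    { "subsystems": [ { "subsystem": "bdev",
        "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
    JSON

Keeping each fragment in an array element sidesteps quoting problems: the heredoc is expanded once per subsystem, and the only join logic is the one-character IFS.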
00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:52.053 "params": { 00:43:52.053 "name": "Nvme0", 00:43:52.053 "trtype": "tcp", 00:43:52.053 "traddr": "10.0.0.2", 00:43:52.053 "adrfam": "ipv4", 00:43:52.053 "trsvcid": "4420", 00:43:52.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:52.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:52.053 "hdgst": false, 00:43:52.053 "ddgst": false 00:43:52.053 }, 00:43:52.053 "method": "bdev_nvme_attach_controller" 00:43:52.053 },{ 00:43:52.053 "params": { 00:43:52.053 "name": "Nvme1", 00:43:52.053 "trtype": "tcp", 00:43:52.053 "traddr": "10.0.0.2", 00:43:52.053 "adrfam": "ipv4", 00:43:52.053 "trsvcid": "4420", 00:43:52.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:52.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:52.053 "hdgst": false, 00:43:52.053 "ddgst": false 00:43:52.053 }, 00:43:52.053 "method": "bdev_nvme_attach_controller" 00:43:52.053 }' 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:52.053 11:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:52.053 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:52.053 ... 00:43:52.053 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:52.053 ... 
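With the attach-controller config above on one file descriptor and the generated job file on another, the fio_bdev wrapper boils down to running stock fio with SPDK's bdev plugin preloaded; the ldd | grep libasan | awk '{print $3}' probes traced above only decide whether an ASan runtime must be preloaded ahead of the plugin (none is linked in this build, so asan_lib stays empty). Saved to ordinary files instead of the /dev/fd descriptors the script uses, the invocation is roughly (a sketch; bdev.json and job.fio are stand-in names):

    # bdev.json: the JSON printed above; job.fio: the generated fio job file,
    # whose per-job filename= entries are expected to name the attached bdevs
    # (e.g. Nvme0n1 for Nvme0's first namespace -- an assumed name here)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

Preloading the plugin is what lets an unmodified fio binary resolve the external spdk_bdev ioengine.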
00:43:52.053 fio-3.35 00:43:52.053 Starting 4 threads 00:43:57.359 00:43:57.359 filename0: (groupid=0, jobs=1): err= 0: pid=109691: Thu Dec 5 11:27:20 2024 00:43:57.359 read: IOPS=2234, BW=17.5MiB/s (18.3MB/s)(87.3MiB/5001msec) 00:43:57.359 slat (usec): min=5, max=260, avg=15.88, stdev=11.44 00:43:57.359 clat (usec): min=1391, max=6017, avg=3497.48, stdev=294.48 00:43:57.359 lat (usec): min=1398, max=6040, avg=3513.36, stdev=294.56 00:43:57.359 clat percentiles (usec): 00:43:57.359 | 1.00th=[ 2540], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3392], 00:43:57.359 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3458], 60.00th=[ 3490], 00:43:57.359 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3687], 95.00th=[ 3818], 00:43:57.359 | 99.00th=[ 4948], 99.50th=[ 5276], 99.90th=[ 5735], 99.95th=[ 5800], 00:43:57.359 | 99.99th=[ 5932] 00:43:57.359 bw ( KiB/s): min=17408, max=18160, per=24.94%, avg=17838.56, stdev=219.13, samples=9 00:43:57.359 iops : min= 2176, max= 2270, avg=2229.78, stdev=27.39, samples=9 00:43:57.359 lat (msec) : 2=0.40%, 4=97.43%, 10=2.17% 00:43:57.359 cpu : usr=92.42%, sys=5.90%, ctx=81, majf=0, minf=0 00:43:57.359 IO depths : 1=8.8%, 2=24.4%, 4=50.6%, 8=16.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 issued rwts: total=11176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.359 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:57.359 filename0: (groupid=0, jobs=1): err= 0: pid=109692: Thu Dec 5 11:27:20 2024 00:43:57.359 read: IOPS=2235, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5002msec) 00:43:57.359 slat (nsec): min=5561, max=71170, avg=14253.17, stdev=10780.19 00:43:57.359 clat (usec): min=1800, max=4948, avg=3513.25, stdev=192.71 00:43:57.359 lat (usec): min=1812, max=4957, avg=3527.50, stdev=191.84 00:43:57.359 clat percentiles (usec): 00:43:57.359 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:43:57.359 | 30.00th=[ 3458], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:43:57.359 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3720], 95.00th=[ 3818], 00:43:57.359 | 99.00th=[ 4080], 99.50th=[ 4178], 99.90th=[ 4424], 99.95th=[ 4490], 00:43:57.359 | 99.99th=[ 4817] 00:43:57.359 bw ( KiB/s): min=17280, max=18304, per=24.99%, avg=17877.33, stdev=313.53, samples=9 00:43:57.359 iops : min= 2160, max= 2288, avg=2234.67, stdev=39.19, samples=9 00:43:57.359 lat (msec) : 2=0.21%, 4=98.10%, 10=1.70% 00:43:57.359 cpu : usr=94.12%, sys=4.78%, ctx=3, majf=0, minf=0 00:43:57.359 IO depths : 1=9.7%, 2=24.9%, 4=50.1%, 8=15.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 issued rwts: total=11184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.359 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:57.359 filename1: (groupid=0, jobs=1): err= 0: pid=109693: Thu Dec 5 11:27:20 2024 00:43:57.359 read: IOPS=2234, BW=17.5MiB/s (18.3MB/s)(87.3MiB/5002msec) 00:43:57.359 slat (usec): min=3, max=701, avg=16.30, stdev=12.20 00:43:57.359 clat (usec): min=958, max=6061, avg=3521.94, stdev=279.20 00:43:57.359 lat (usec): min=965, max=6076, avg=3538.24, stdev=278.83 00:43:57.359 clat percentiles (usec): 00:43:57.359 | 1.00th=[ 2737], 5.00th=[ 3064], 10.00th=[ 3359], 20.00th=[ 3425], 00:43:57.359 | 30.00th=[ 3458], 40.00th=[ 3458], 
50.00th=[ 3490], 60.00th=[ 3523], 00:43:57.359 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3818], 95.00th=[ 3982], 00:43:57.359 | 99.00th=[ 4424], 99.50th=[ 4490], 99.90th=[ 5276], 99.95th=[ 5276], 00:43:57.359 | 99.99th=[ 5735] 00:43:57.359 bw ( KiB/s): min=17408, max=18128, per=24.94%, avg=17845.33, stdev=218.21, samples=9 00:43:57.359 iops : min= 2176, max= 2266, avg=2230.67, stdev=27.28, samples=9 00:43:57.359 lat (usec) : 1000=0.03% 00:43:57.359 lat (msec) : 2=0.31%, 4=94.77%, 10=4.89% 00:43:57.359 cpu : usr=92.98%, sys=5.74%, ctx=45, majf=0, minf=1 00:43:57.359 IO depths : 1=3.6%, 2=9.3%, 4=65.6%, 8=21.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 issued rwts: total=11179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.359 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:57.359 filename1: (groupid=0, jobs=1): err= 0: pid=109694: Thu Dec 5 11:27:20 2024 00:43:57.359 read: IOPS=2237, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5001msec) 00:43:57.359 slat (nsec): min=6555, max=56345, avg=10819.09, stdev=5131.30 00:43:57.359 clat (usec): min=1061, max=4971, avg=3527.18, stdev=221.26 00:43:57.359 lat (usec): min=1069, max=5009, avg=3538.00, stdev=221.12 00:43:57.359 clat percentiles (usec): 00:43:57.359 | 1.00th=[ 2966], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3425], 00:43:57.359 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:43:57.359 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3752], 95.00th=[ 3884], 00:43:57.359 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 4817], 00:43:57.359 | 99.99th=[ 4948] 00:43:57.359 bw ( KiB/s): min=17408, max=18340, per=25.01%, avg=17892.00, stdev=288.42, samples=9 00:43:57.359 iops : min= 2176, max= 2292, avg=2236.44, stdev=35.96, samples=9 00:43:57.359 lat (msec) : 2=0.27%, 4=96.98%, 10=2.75% 00:43:57.359 cpu : usr=92.78%, sys=6.00%, ctx=6, majf=0, minf=0 00:43:57.359 IO depths : 1=8.2%, 2=18.1%, 4=56.9%, 8=16.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:57.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:57.359 issued rwts: total=11190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:57.359 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:57.359 00:43:57.359 Run status group 0 (all jobs): 00:43:57.359 READ: bw=69.9MiB/s (73.3MB/s), 17.5MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=349MiB (366MB), run=5001-5002msec 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.359 11:27:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.359 00:43:57.359 real 0m23.777s 00:43:57.359 user 2m4.459s 00:43:57.359 sys 0m7.027s 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:57.359 ************************************ 00:43:57.359 END TEST fio_dif_rand_params 00:43:57.359 ************************************ 00:43:57.359 11:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:57.359 11:27:21 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:57.359 11:27:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:57.359 11:27:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:57.359 11:27:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:57.359 ************************************ 00:43:57.359 START TEST fio_dif_digest 00:43:57.359 ************************************ 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:57.359 11:27:21 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:57.360 bdev_null0 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:57.360 [2024-12-05 11:27:21.323700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:57.360 
11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:43:57.360 { 00:43:57.360 "params": { 00:43:57.360 "name": "Nvme$subsystem", 00:43:57.360 "trtype": "$TEST_TRANSPORT", 00:43:57.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:57.360 "adrfam": "ipv4", 00:43:57.360 "trsvcid": "$NVMF_PORT", 00:43:57.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:57.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:57.360 "hdgst": ${hdgst:-false}, 00:43:57.360 "ddgst": ${ddgst:-false} 00:43:57.360 }, 00:43:57.360 "method": "bdev_nvme_attach_controller" 00:43:57.360 } 00:43:57.360 EOF 00:43:57.360 )") 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 
00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:43:57.360 "params": { 00:43:57.360 "name": "Nvme0", 00:43:57.360 "trtype": "tcp", 00:43:57.360 "traddr": "10.0.0.2", 00:43:57.360 "adrfam": "ipv4", 00:43:57.360 "trsvcid": "4420", 00:43:57.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:57.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:57.360 "hdgst": true, 00:43:57.360 "ddgst": true 00:43:57.360 }, 00:43:57.360 "method": "bdev_nvme_attach_controller" 00:43:57.360 }' 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:57.360 11:27:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:57.360 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:57.360 ... 
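Relative to the earlier random-parameter runs, the only change in the attach params printed above is "hdgst": true and "ddgst": true, which make the initiator negotiate the NVMe/TCP header and data digests (CRC32C checks on each PDU) with the target, while the backing null bdev was created with --dif-type 3. The generated job file itself is not echoed, but from the parameters traced earlier (bs=128k, numjobs=3, iodepth=3, runtime=10) and the fio banner below it is equivalent to a sketch like this (filename and thread settings are assumptions, not shown in the trace):

    [global]
    ioengine=spdk_bdev
    thread=1
    time_based=1
    runtime=10

    [filename0]
    ; names the attached bdev; Nvme0n1 assumed for cnode0's first namespace
    filename=Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

A digest mismatch would surface as I/O errors in the per-job err= field; all three jobs below complete with err= 0.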
00:43:57.360 fio-3.35 00:43:57.360 Starting 3 threads 00:44:09.549 00:44:09.549 filename0: (groupid=0, jobs=1): err= 0: pid=109800: Thu Dec 5 11:27:32 2024 00:44:09.549 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(313MiB/10008msec) 00:44:09.549 slat (nsec): min=6407, max=45465, avg=13339.34, stdev=3659.77 00:44:09.549 clat (usec): min=5640, max=53558, avg=11978.45, stdev=2960.25 00:44:09.549 lat (usec): min=5650, max=53568, avg=11991.79, stdev=2960.24 00:44:09.549 clat percentiles (usec): 00:44:09.549 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:44:09.549 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:44:09.549 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:44:09.549 | 99.00th=[13960], 99.50th=[15139], 99.90th=[53216], 99.95th=[53216], 00:44:09.549 | 99.99th=[53740] 00:44:09.549 bw ( KiB/s): min=27904, max=35328, per=37.41%, avg=31973.05, stdev=2049.59, samples=19 00:44:09.549 iops : min= 218, max= 276, avg=249.79, stdev=16.01, samples=19 00:44:09.549 lat (msec) : 10=2.08%, 20=97.44%, 100=0.48% 00:44:09.549 cpu : usr=89.13%, sys=9.44%, ctx=12, majf=0, minf=0 00:44:09.549 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:09.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.549 issued rwts: total=2503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:09.549 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:09.549 filename0: (groupid=0, jobs=1): err= 0: pid=109801: Thu Dec 5 11:27:32 2024 00:44:09.549 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(298MiB/10004msec) 00:44:09.549 slat (nsec): min=6385, max=43008, avg=12629.14, stdev=3760.81 00:44:09.549 clat (usec): min=3191, max=15977, avg=12561.13, stdev=1326.65 00:44:09.549 lat (usec): min=3201, max=15994, avg=12573.76, stdev=1326.74 00:44:09.549 clat percentiles (usec): 00:44:09.549 | 1.00th=[ 7635], 5.00th=[10552], 10.00th=[11076], 20.00th=[11731], 00:44:09.549 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:44:09.549 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:44:09.549 | 99.00th=[15139], 99.50th=[15401], 99.90th=[15795], 99.95th=[15926], 00:44:09.549 | 99.99th=[15926] 00:44:09.549 bw ( KiB/s): min=27904, max=33280, per=35.66%, avg=30480.68, stdev=1572.84, samples=19 00:44:09.549 iops : min= 218, max= 260, avg=238.11, stdev=12.28, samples=19 00:44:09.549 lat (msec) : 4=0.04%, 10=2.93%, 20=97.02% 00:44:09.549 cpu : usr=89.03%, sys=9.59%, ctx=10, majf=0, minf=0 00:44:09.549 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:09.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.549 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:09.549 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:09.549 filename0: (groupid=0, jobs=1): err= 0: pid=109802: Thu Dec 5 11:27:32 2024 00:44:09.549 read: IOPS=179, BW=22.4MiB/s (23.5MB/s)(224MiB/10003msec) 00:44:09.549 slat (nsec): min=5021, max=49991, avg=12485.77, stdev=3179.95 00:44:09.549 clat (usec): min=9633, max=20892, avg=16709.55, stdev=1602.07 00:44:09.549 lat (usec): min=9643, max=20908, avg=16722.03, stdev=1602.38 00:44:09.549 clat percentiles (usec): 00:44:09.549 | 1.00th=[10683], 5.00th=[14353], 10.00th=[15008], 20.00th=[15533], 00:44:09.549 | 
30.00th=[16057], 40.00th=[16319], 50.00th=[16909], 60.00th=[17171], 00:44:09.549 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:44:09.549 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:44:09.549 | 99.99th=[20841] 00:44:09.549 bw ( KiB/s): min=20736, max=25856, per=26.84%, avg=22945.68, stdev=1300.35, samples=19 00:44:09.549 iops : min= 162, max= 202, avg=179.26, stdev=10.16, samples=19 00:44:09.549 lat (msec) : 10=0.06%, 20=99.39%, 50=0.56% 00:44:09.549 cpu : usr=90.37%, sys=8.48%, ctx=6, majf=0, minf=0 00:44:09.549 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:09.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:09.549 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:09.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:44:09.550 00:44:09.550 Run status group 0 (all jobs): 00:44:09.550 READ: bw=83.5MiB/s (87.5MB/s), 22.4MiB/s-31.3MiB/s (23.5MB/s-32.8MB/s), io=835MiB (876MB), run=10003-10008msec 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:09.550 ************************************ 00:44:09.550 END TEST fio_dif_digest 00:44:09.550 ************************************ 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:09.550 00:44:09.550 real 0m11.128s 00:44:09.550 user 0m27.613s 00:44:09.550 sys 0m3.072s 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:09.550 11:27:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:44:09.550 11:27:32 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:44:09.550 11:27:32 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:44:09.550 rmmod nvme_tcp 00:44:09.550 rmmod nvme_fabrics 00:44:09.550 rmmod nvme_keyring 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@105 -- # modprobe 
-v -r nvme-fabrics 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 109041 ']' 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 109041 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 109041 ']' 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 109041 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109041 00:44:09.550 killing process with pid 109041 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109041' 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@973 -- # kill 109041 00:44:09.550 11:27:32 nvmf_dif -- common/autotest_common.sh@978 -- # wait 109041 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:44:09.550 11:27:32 nvmf_dif -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:09.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:09.550 Waiting for block devices as requested 00:44:09.550 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:44:09.550 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:09.550 11:27:33 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:44:09.550 11:27:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:44:09.550 11:27:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 
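Condensed, the nvmf_fini teardown traced through this loop (it continues below for initiator1 and the two target devices) amounts to the following sequence; all names come from the log, and the sketch assumes target0/target1 were already moved into nvmf_ns_spdk, so they disappear with the namespace:

ip netns delete nvmf_ns_spdk      # _remove_target_ns: target0/target1 go with it
ip link delete nvmf_br            # delete_main_bridge
ip link delete initiator0         # per-device loop over dev_map
ip link delete initiator1
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: strip SPDK rules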
00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:44:09.550 11:27:33 nvmf_dif -- nvmf/setup.sh@274 -- # iptr 00:44:09.550 11:27:33 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:44:09.550 11:27:33 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:44:09.550 11:27:33 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:44:09.550 00:44:09.550 real 1m1.371s 00:44:09.550 user 3m46.559s 00:44:09.550 sys 0m21.785s 00:44:09.550 ************************************ 00:44:09.550 END TEST nvmf_dif 00:44:09.550 ************************************ 00:44:09.550 11:27:33 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:09.550 11:27:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:09.550 11:27:33 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:09.550 11:27:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:09.550 11:27:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:09.550 11:27:33 -- common/autotest_common.sh@10 -- # set +x 00:44:09.550 ************************************ 00:44:09.550 START TEST nvmf_abort_qd_sizes 00:44:09.550 ************************************ 00:44:09.550 11:27:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:44:09.550 * Looking for test storage... 
00:44:09.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:09.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.550 --rc genhtml_branch_coverage=1 00:44:09.550 --rc genhtml_function_coverage=1 00:44:09.550 --rc genhtml_legend=1 00:44:09.550 --rc geninfo_all_blocks=1 00:44:09.550 --rc geninfo_unexecuted_blocks=1 00:44:09.550 00:44:09.550 ' 00:44:09.550 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:09.550 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.550 --rc genhtml_branch_coverage=1 00:44:09.550 --rc genhtml_function_coverage=1 00:44:09.550 --rc genhtml_legend=1 00:44:09.550 --rc geninfo_all_blocks=1 00:44:09.550 --rc geninfo_unexecuted_blocks=1 00:44:09.550 00:44:09.550 ' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.551 --rc genhtml_branch_coverage=1 00:44:09.551 --rc genhtml_function_coverage=1 00:44:09.551 --rc genhtml_legend=1 00:44:09.551 --rc geninfo_all_blocks=1 00:44:09.551 --rc geninfo_unexecuted_blocks=1 00:44:09.551 00:44:09.551 ' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:09.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:09.551 --rc genhtml_branch_coverage=1 00:44:09.551 --rc genhtml_function_coverage=1 00:44:09.551 --rc genhtml_legend=1 00:44:09.551 --rc geninfo_all_blocks=1 00:44:09.551 --rc geninfo_unexecuted_blocks=1 00:44:09.551 00:44:09.551 ' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:44:09.551 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:09.551 
11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@280 -- # nvmf_veth_init 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@223 -- # create_target_ns 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # create_main_bridge 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@105 -- # delete_main_bridge 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
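One real script bug surfaces in the sourcing trace above: nvmf/common.sh line 31 runs '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because the flag being tested is unset or empty at that point. A defensive rewrite would default the value before the numeric test; FLAG below is a stand-in, since the trace does not name the variable:

FLAG=""                           # hypothetical; whatever common.sh line 31 tests
if [ "${FLAG:-0}" -eq 1 ]; then   # empty/unset compares as 0 instead of erroring
    echo "flag set"
fi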
00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator0 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:44:09.551 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target0 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set 
target0 up' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0 up 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up target0_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target0 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:44:09.810 10.0.0.1 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias 00:44:09.810 10.0.0.2 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator0 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target0_br 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:44:09.810 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator1 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target1 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1 up 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up target1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set 
target1_br up 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target1 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772163 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:44:09.811 10.0.0.3 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772164 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:44:09.811 10.0.0.4 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator1 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # 
set_up target1 NVMF_TARGET_NS_CMD 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:44:09.811 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target1_br 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 2 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 
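Both setup_interface_pair passes traced above follow one recipe; the only variable input is the 32-bit pool value that val_to_ip unpacks into a dotted quad (167772161 = 0x0A000001 -> 10.0.0.1). A condensed sketch of pair 0 with the names and addresses from the trace (link-up steps omitted; the bit-shift form of the conversion is an assumption, the log only shows the resulting printf):

# unpack a pool value into an IPv4 address (167772161 -> 10.0.0.1)
val=167772161
printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xFF)) $(((val >> 8) & 0xFF)) $((val & 0xFF))

# one initiator/target veth pair, as executed above
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk                    # target half lives in the ns
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0_br master nvmf_br                  # _br halves join the bridge
ip link set target0_br master nvmf_br
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
# pair 1 repeats this with initiator1/target1 and 10.0.0.3/10.0.0.4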
00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:44:10.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:10.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:44:10.071 00:44:10.071 --- 10.0.0.1 ping statistics --- 00:44:10.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:10.071 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:44:10.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:10.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:44:10.071 00:44:10.071 --- 10.0.0.2 ping statistics --- 00:44:10.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:10.071 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:44:10.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:44:10.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:44:10.071 00:44:10.071 --- 10.0.0.3 ping statistics --- 00:44:10.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:10.071 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:44:10.071 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:44:10.072 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:44:10.072 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:44:10.072 00:44:10.072 --- 10.0.0.4 ping statistics --- 00:44:10.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:10.072 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # return 0 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:44:10.072 11:27:34 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:11.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:11.017 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:11.017 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 
00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:44:11.017 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:44:11.018 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:44:11.276 
11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:44:11.276 ' 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=110460 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 110460 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 110460 ']' 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:11.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:11.276 11:27:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:11.276 [2024-12-05 11:27:35.779274] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:44:11.276 [2024-12-05 11:27:35.779377] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:11.535 [2024-12-05 11:27:35.938310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:11.535 [2024-12-05 11:27:36.004710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
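In short, nvmf/setup.sh derives every test address from the interface alias rather than from ip addr; stripped of the eval/trace plumbing, the lookups above reduce to a few sysfs reads (device names come from the fixture's dev_map):
  cat /sys/class/net/initiator1/ifalias                          # -> 10.0.0.3 (NVMF_SECOND_INITIATOR_IP)
  ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias  # -> 10.0.0.2 (NVMF_FIRST_TARGET_IP)
  ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias  # -> 10.0.0.4 (NVMF_SECOND_TARGET_IP)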
00:44:11.535 [2024-12-05 11:27:36.004778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:11.535 [2024-12-05 11:27:36.004794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:11.535 [2024-12-05 11:27:36.004808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:11.535 [2024-12-05 11:27:36.004819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:11.535 [2024-12-05 11:27:36.006049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:11.535 [2024-12-05 11:27:36.006177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:11.535 [2024-12-05 11:27:36.006275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:11.535 [2024-12-05 11:27:36.006277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 
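The class/subclass/prog-if triple computed above is NVMe's PCI signature (class 01, subclass 08, prog-if 02, i.e. cc 0108 and the -p02 suffix); the enumeration traced next amounts to this one pipeline, lifted from scripts/common.sh:
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
Each surviving field $1 is a BDF such as 0000:00:10.0, which then passes through the pci_can_use allow/deny check.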
00:44:11.535 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:11.793 11:27:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:11.793 ************************************ 00:44:11.793 START TEST spdk_target_abort 00:44:11.793 ************************************ 00:44:11.793 11:27:36 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.793 spdk_targetn1 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.793 [2024-12-05 11:27:36.305454] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.793 [2024-12-05 11:27:36.345775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.793 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:11.794 11:27:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:15.147 Initializing NVMe Controllers 00:44:15.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:15.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:15.147 Initialization complete. Launching workers. 
00:44:15.147 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12654, failed: 0
00:44:15.147 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1082, failed to submit 11572
00:44:15.147 success 752, unsuccessful 330, failed 0
00:44:15.147 11:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:44:15.147 11:27:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:44:18.429 Initializing NVMe Controllers
00:44:18.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:44:18.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:44:18.429 Initialization complete. Launching workers.
00:44:18.429 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5897, failed: 0
00:44:18.429 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1224, failed to submit 4673
00:44:18.429 success 244, unsuccessful 980, failed 0
00:44:18.429 11:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:44:18.429 11:27:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:44:21.730 Initializing NVMe Controllers
00:44:21.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:44:21.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:44:21.730 Initialization complete. Launching workers.
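A quick way to sanity-check each summary: aborts submitted plus aborts that failed to submit must equal I/Os completed, and success plus unsuccessful must equal aborts submitted. For the qd=4 run, 1082 + 11572 = 12654 and 752 + 330 = 1082; for qd=24, 1224 + 4673 = 5897 and 244 + 980 = 1224, so both runs reconcile.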
00:44:21.730 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30049, failed: 0 00:44:21.730 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2494, failed to submit 27555 00:44:21.730 success 458, unsuccessful 2036, failed 0 00:44:21.730 11:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:21.730 11:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.730 11:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:21.730 11:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:21.730 11:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:21.730 11:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:21.730 11:27:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110460 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 110460 ']' 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 110460 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110460 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110460' 00:44:22.659 killing process with pid 110460 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 110460 00:44:22.659 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 110460 00:44:22.917 00:44:22.917 real 0m11.287s 00:44:22.917 user 0m42.611s 00:44:22.917 sys 0m2.136s 00:44:22.917 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:22.917 ************************************ 00:44:22.917 END TEST spdk_target_abort 00:44:22.917 ************************************ 00:44:22.917 11:27:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:22.917 11:27:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:22.917 11:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:22.917 11:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:22.917 11:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:23.175 ************************************ 00:44:23.175 START TEST kernel_target_abort 00:44:23.175 
************************************ 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:23.175 11:27:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:23.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:23.433 Waiting for block devices as requested 00:44:23.707 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:44:23.707 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:44:23.966 No valid GPT data, bailing 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:44:23.966 No valid GPT data, bailing 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
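The repeated 'No valid GPT data, bailing' lines are the good path here: a namespace with no partition table fails the block_in_use probe and is therefore free to serve as the kernel target's backing device. The gist, with the helper's internals elided:
  pt=$(blkid -s PTTYPE -o value "/dev/$block")   # empty when no partition table is present
  [[ -z $pt ]] && nvme=/dev/$block               # last free device wins (here /dev/nvme1n1)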
00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:44:23.966 No valid GPT data, bailing 00:44:23.966 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:44:24.224 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:44:24.225 No valid GPT data, bailing 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ 
-b /dev/nvme1n1 ]] 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 --hostid=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 -a 10.0.0.1 -t tcp -s 4420 00:44:24.225 00:44:24.225 Discovery Log Number of Records 2, Generation counter 2 00:44:24.225 =====Discovery Log Entry 0====== 00:44:24.225 trtype: tcp 00:44:24.225 adrfam: ipv4 00:44:24.225 subtype: current discovery subsystem 00:44:24.225 treq: not specified, sq flow control disable supported 00:44:24.225 portid: 1 00:44:24.225 trsvcid: 4420 00:44:24.225 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:24.225 traddr: 10.0.0.1 00:44:24.225 eflags: none 00:44:24.225 sectype: none 00:44:24.225 =====Discovery Log Entry 1====== 00:44:24.225 trtype: tcp 00:44:24.225 adrfam: ipv4 00:44:24.225 subtype: nvme subsystem 00:44:24.225 treq: not specified, sq flow control disable supported 00:44:24.225 portid: 1 00:44:24.225 trsvcid: 4420 00:44:24.225 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:24.225 traddr: 10.0.0.1 00:44:24.225 eflags: none 00:44:24.225 sectype: none 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:24.225 11:27:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:24.225 11:27:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:27.523 Initializing NVMe Controllers 00:44:27.523 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:27.523 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:27.523 Initialization complete. Launching workers. 00:44:27.523 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41566, failed: 0 00:44:27.523 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41566, failed to submit 0 00:44:27.523 success 0, unsuccessful 41566, failed 0 00:44:27.523 11:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:27.523 11:27:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:30.807 Initializing NVMe Controllers 00:44:30.807 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:30.807 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:30.807 Initialization complete. Launching workers. 
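Condensed, the configfs writes traced above are all it takes to stand up the kernel nvmet target; the attribute paths below are reconstructed from the nvmet configfs ABI, since the trace records only the values being echoed:
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
The nvme discover output above confirms the port is live: entry 0 is the discovery subsystem itself, entry 1 the nqn.2016-06.io.spdk:testnqn subsystem just linked in.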
00:44:30.807 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78050, failed: 0 00:44:30.807 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32939, failed to submit 45111 00:44:30.807 success 0, unsuccessful 32939, failed 0 00:44:30.807 11:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:30.807 11:27:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:34.124 Initializing NVMe Controllers 00:44:34.124 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:34.124 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:34.124 Initialization complete. Launching workers. 00:44:34.124 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99500, failed: 0 00:44:34.124 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24834, failed to submit 74666 00:44:34.124 success 0, unsuccessful 24834, failed 0 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:44:34.124 11:27:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:34.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:37.974 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:37.974 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:44:37.974 ************************************ 00:44:37.974 END TEST kernel_target_abort 00:44:37.974 ************************************ 00:44:37.974 00:44:37.974 real 0m14.557s 00:44:37.974 user 0m6.420s 00:44:37.974 sys 0m5.477s 00:44:37.974 11:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:37.974 11:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:37.974 
11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:44:37.974 rmmod nvme_tcp 00:44:37.974 rmmod nvme_fabrics 00:44:37.974 rmmod nvme_keyring 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 110460 ']' 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 110460 00:44:37.974 11:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 110460 ']' 00:44:37.975 11:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 110460 00:44:37.975 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (110460) - No such process 00:44:37.975 Process with pid 110460 is not found 00:44:37.975 11:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 110460 is not found' 00:44:37.975 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:44:37.975 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:38.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:44:38.233 Waiting for block devices as requested 00:44:38.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:44:38.492 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:38.492 11:28:02 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:44:38.492 11:28:02 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev 00:44:38.492 11:28:02 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns 00:44:38.492 11:28:02 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:44:38.492 11:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:44:38.492 11:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@266 -- # delete_dev initiator0 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:44:38.492 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore 00:44:38.751 00:44:38.751 real 0m29.261s 00:44:38.751 user 0m50.367s 00:44:38.751 sys 0m9.468s 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:38.751 ************************************ 00:44:38.751 END TEST nvmf_abort_qd_sizes 00:44:38.751 11:28:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:38.751 ************************************ 00:44:38.751 11:28:03 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:44:38.751 11:28:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:38.751 11:28:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:38.751 11:28:03 -- common/autotest_common.sh@10 -- # set +x 00:44:38.751 ************************************ 00:44:38.751 START TEST keyring_file 00:44:38.751 ************************************ 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:44:38.751 * Looking for test storage... 
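Teardown is the mirror image: unload the initiator-side modules, delete the test network, and scrub the firewall. Stripped of the per-device bookkeeping traced above, that is roughly:
  modprobe -r nvme-tcp nvme-fabrics       # the rmmod lines above show the dependent modules going first
  ip link delete nvmf_br                  # the test bridge
  ip link delete initiator0
  ip link delete initiator1               # target0/target1 already vanished with the nvmf_ns_spdk netns
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules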
00:44:38.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:38.751 11:28:03 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:38.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.751 --rc genhtml_branch_coverage=1 00:44:38.751 --rc genhtml_function_coverage=1 00:44:38.751 --rc genhtml_legend=1 00:44:38.751 --rc geninfo_all_blocks=1 00:44:38.751 --rc geninfo_unexecuted_blocks=1 00:44:38.751 00:44:38.751 ' 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:38.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.751 --rc genhtml_branch_coverage=1 00:44:38.751 --rc genhtml_function_coverage=1 00:44:38.751 --rc genhtml_legend=1 00:44:38.751 --rc geninfo_all_blocks=1 00:44:38.751 --rc 
geninfo_unexecuted_blocks=1 00:44:38.751 00:44:38.751 ' 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:38.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.751 --rc genhtml_branch_coverage=1 00:44:38.751 --rc genhtml_function_coverage=1 00:44:38.751 --rc genhtml_legend=1 00:44:38.751 --rc geninfo_all_blocks=1 00:44:38.751 --rc geninfo_unexecuted_blocks=1 00:44:38.751 00:44:38.751 ' 00:44:38.751 11:28:03 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:38.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.751 --rc genhtml_branch_coverage=1 00:44:38.751 --rc genhtml_function_coverage=1 00:44:38.751 --rc genhtml_legend=1 00:44:38.751 --rc geninfo_all_blocks=1 00:44:38.751 --rc geninfo_unexecuted_blocks=1 00:44:38.751 00:44:38.751 ' 00:44:38.751 11:28:03 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:44:38.751 11:28:03 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:38.751 11:28:03 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:39.011 11:28:03 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:39.011 11:28:03 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:39.011 11:28:03 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:39.011 11:28:03 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:39.011 11:28:03 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.011 11:28:03 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.011 11:28:03 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.011 11:28:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:39.011 11:28:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:44:39.011 11:28:03 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:44:39.011 11:28:03 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:44:39.011 11:28:03 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@50 -- # : 0 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:44:39.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:39.011 
11:28:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4gp4TRSlBp 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@507 -- # python - 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4gp4TRSlBp 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4gp4TRSlBp 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.4gp4TRSlBp 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4eLANzQR9T 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:44:39.011 11:28:03 keyring_file -- nvmf/common.sh@507 -- # python - 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4eLANzQR9T 00:44:39.011 11:28:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4eLANzQR9T 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.4eLANzQR9T 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=111365 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:39.011 11:28:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111365 00:44:39.011 11:28:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111365 ']' 00:44:39.011 11:28:03 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:39.011 11:28:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:39.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
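
[Editor's sketch] The prep_key sequence traced above (mktemp, format_interchange_psk, chmod 0600) turns a raw hex string into an NVMe TLS PSK interchange file. A minimal reconstruction of what the format_key/`python -` step appears to compute, assuming the interchange layout is prefix, a two-hex-digit digest indicator, then base64 of the key bytes plus a trailing little-endian CRC32 — treat the exact byte handling as an assumption, not a spec quote:

# sketch of the format_interchange_psk step, assumed layout:
# "<prefix>:<digest-hex>:<base64(key || crc32_le(key))>:"
format_key_sketch() {
  local prefix=$1 key=$2 digest=$3
  python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")  # trailing little-endian CRC32
b64 = base64.b64encode(key.encode() + crc).decode()
print(f"{prefix}:{digest:02x}:{b64}:", end="")
PY
}
# mirrors the traced call: format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
# the result is written to the mktemp path and locked down with chmod 0600
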
00:44:39.011 11:28:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:39.011 11:28:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:39.011 11:28:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:39.011 [2024-12-05 11:28:03.627734] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:44:39.011 [2024-12-05 11:28:03.627846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111365 ] 00:44:39.270 [2024-12-05 11:28:03.786762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:39.270 [2024-12-05 11:28:03.850505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:40.207 11:28:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:40.207 [2024-12-05 11:28:04.662821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:40.207 null0 00:44:40.207 [2024-12-05 11:28:04.694803] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:40.207 [2024-12-05 11:28:04.695014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:40.207 11:28:04 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:40.207 11:28:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:40.207 [2024-12-05 11:28:04.726799] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:40.207 2024/12/05 11:28:04 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:44:40.207 request: 00:44:40.207 { 00:44:40.208 "method": "nvmf_subsystem_add_listener", 00:44:40.208 "params": { 
00:44:40.208 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:40.208 "secure_channel": false, 00:44:40.208 "listen_address": { 00:44:40.208 "trtype": "tcp", 00:44:40.208 "traddr": "127.0.0.1", 00:44:40.208 "trsvcid": "4420" 00:44:40.208 } 00:44:40.208 } 00:44:40.208 } 00:44:40.208 Got JSON-RPC error response 00:44:40.208 GoRPCClient: error on JSON-RPC call 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:40.208 11:28:04 keyring_file -- keyring/file.sh@47 -- # bperfpid=111400 00:44:40.208 11:28:04 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:40.208 11:28:04 keyring_file -- keyring/file.sh@49 -- # waitforlisten 111400 /var/tmp/bperf.sock 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111400 ']' 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:40.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:40.208 11:28:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:40.208 [2024-12-05 11:28:04.799218] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:44:40.208 [2024-12-05 11:28:04.799349] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111400 ] 00:44:40.466 [2024-12-05 11:28:04.951039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:40.466 [2024-12-05 11:28:05.007704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:41.407 11:28:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:41.407 11:28:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:41.407 11:28:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:41.407 11:28:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:41.665 11:28:06 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4eLANzQR9T 00:44:41.665 11:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4eLANzQR9T 00:44:41.924 11:28:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:41.924 11:28:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:41.924 11:28:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:41.924 11:28:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:41.924 11:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.183 11:28:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.4gp4TRSlBp == \/\t\m\p\/\t\m\p\.\4\g\p\4\T\R\S\l\B\p ]] 00:44:42.183 11:28:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:42.183 11:28:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:42.183 11:28:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.183 11:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.183 11:28:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:42.442 11:28:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.4eLANzQR9T == \/\t\m\p\/\t\m\p\.\4\e\L\A\N\z\Q\R\9\T ]] 00:44:42.442 11:28:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:42.442 11:28:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:42.442 11:28:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.442 11:28:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.442 11:28:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.442 11:28:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:42.700 11:28:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:42.700 11:28:07 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:42.700 11:28:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:42.700 11:28:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.700 11:28:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.700 11:28:07 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.700 11:28:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:42.959 11:28:07 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:42.959 11:28:07 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:42.959 11:28:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:43.217 [2024-12-05 11:28:07.673267] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:43.217 nvme0n1 00:44:43.217 11:28:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:43.217 11:28:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:43.217 11:28:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:43.217 11:28:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:43.217 11:28:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.217 11:28:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.475 11:28:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:43.475 11:28:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:43.475 11:28:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:43.475 11:28:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:43.475 11:28:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:43.475 11:28:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.475 11:28:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.733 11:28:08 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:43.734 11:28:08 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:43.734 Running I/O for 1 seconds... 
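
[Editor's sketch] The refcount assertions sprinkled through this trace all reduce to one keyring_get_keys RPC filtered through jq. A hedged reconstruction of the keyring/common.sh helpers as the trace suggests they are defined, with bperf_cmd standing for rpc.py pointed at the bdevperf socket:

# reconstruction of the helpers behind the get_refcnt checks (names as traced)
bperf_cmd()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
# per the trace: refcnt is 1 right after keyring_file_add_key, and rises to 2
# once bdev_nvme_attach_controller --psk key0 takes its own reference
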
00:44:45.135 15637.00 IOPS, 61.08 MiB/s 00:44:45.135 Latency(us) 00:44:45.135 [2024-12-05T11:28:09.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:45.135 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:45.135 nvme0n1 : 1.01 15680.41 61.25 0.00 0.00 8145.69 3495.25 14043.43 00:44:45.135 [2024-12-05T11:28:09.787Z] =================================================================================================================== 00:44:45.135 [2024-12-05T11:28:09.787Z] Total : 15680.41 61.25 0.00 0.00 8145.69 3495.25 14043.43 00:44:45.135 { 00:44:45.135 "results": [ 00:44:45.135 { 00:44:45.135 "job": "nvme0n1", 00:44:45.135 "core_mask": "0x2", 00:44:45.135 "workload": "randrw", 00:44:45.135 "percentage": 50, 00:44:45.135 "status": "finished", 00:44:45.135 "queue_depth": 128, 00:44:45.135 "io_size": 4096, 00:44:45.135 "runtime": 1.005522, 00:44:45.135 "iops": 15680.41276073522, 00:44:45.135 "mibps": 61.251612346621954, 00:44:45.135 "io_failed": 0, 00:44:45.135 "io_timeout": 0, 00:44:45.135 "avg_latency_us": 8145.68537566406, 00:44:45.135 "min_latency_us": 3495.2533333333336, 00:44:45.135 "max_latency_us": 14043.42857142857 00:44:45.135 } 00:44:45.135 ], 00:44:45.135 "core_count": 1 00:44:45.135 } 00:44:45.135 11:28:09 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:45.135 11:28:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:45.135 11:28:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:45.135 11:28:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:45.135 11:28:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:45.135 11:28:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:45.135 11:28:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.135 11:28:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.412 11:28:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:45.412 11:28:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:45.412 11:28:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:45.412 11:28:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:45.412 11:28:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:45.412 11:28:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.412 11:28:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:45.672 11:28:10 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:45.672 11:28:10 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:45.672 11:28:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:45.672 11:28:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:45.672 11:28:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:45.672 11:28:10 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:45.672 11:28:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:45.672 11:28:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:45.672 11:28:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:45.672 11:28:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:45.932 [2024-12-05 11:28:10.518765] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:45.932 [2024-12-05 11:28:10.519036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5e530 (107): Transport endpoint is not connected 00:44:45.932 [2024-12-05 11:28:10.520022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5e530 (9): Bad file descriptor 00:44:45.932 [2024-12-05 11:28:10.521020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:45.932 [2024-12-05 11:28:10.521043] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:45.932 [2024-12-05 11:28:10.521054] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:45.932 [2024-12-05 11:28:10.521067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
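
[Editor's sketch] The errno pair above (107, then 9) is the expected shape of the key-mismatch case: the target side was provisioned with key0, so a host offering key1 never completes the TLS handshake and the connect collapses at the transport level. The test then confirms the failed attach leaked no key references, which in terms of the get_refcnt sketch earlier is just:

# post-failure assertions from the trace: a rejected attach must not pin keys
(( $(get_refcnt key0) == 1 ))   # still only the keyring's own reference
(( $(get_refcnt key1) == 1 ))   # the offered-but-rejected key is unchanged
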
00:44:45.932 2024/12/05 11:28:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:44:45.932 request: 00:44:45.932 { 00:44:45.932 "method": "bdev_nvme_attach_controller", 00:44:45.932 "params": { 00:44:45.932 "name": "nvme0", 00:44:45.932 "trtype": "tcp", 00:44:45.932 "traddr": "127.0.0.1", 00:44:45.932 "adrfam": "ipv4", 00:44:45.932 "trsvcid": "4420", 00:44:45.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:45.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:45.932 "prchk_reftag": false, 00:44:45.932 "prchk_guard": false, 00:44:45.932 "hdgst": false, 00:44:45.932 "ddgst": false, 00:44:45.932 "psk": "key1", 00:44:45.932 "allow_unrecognized_csi": false 00:44:45.932 } 00:44:45.932 } 00:44:45.932 Got JSON-RPC error response 00:44:45.932 GoRPCClient: error on JSON-RPC call 00:44:45.932 11:28:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:45.932 11:28:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:45.932 11:28:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:45.932 11:28:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:45.932 11:28:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:45.932 11:28:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:45.932 11:28:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:45.932 11:28:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:45.932 11:28:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:45.932 11:28:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:46.500 11:28:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:46.500 11:28:10 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:46.500 11:28:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:46.500 11:28:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:46.500 11:28:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:46.500 11:28:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:46.500 11:28:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:46.758 11:28:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:46.758 11:28:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:46.758 11:28:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:46.758 11:28:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:46.758 11:28:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:47.017 11:28:11 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:47.017 11:28:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:44:47.017 11:28:11 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:47.275 11:28:11 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:44:47.275 11:28:11 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.4gp4TRSlBp 00:44:47.275 11:28:11 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:47.275 11:28:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:47.275 11:28:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:47.275 11:28:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:47.275 11:28:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:47.275 11:28:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:47.275 11:28:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:47.275 11:28:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:47.275 11:28:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:47.534 [2024-12-05 11:28:12.187409] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4gp4TRSlBp': 0100660 00:44:47.534 [2024-12-05 11:28:12.187458] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:47.792 2024/12/05 11:28:12 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.4gp4TRSlBp], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:44:47.792 request: 00:44:47.792 { 00:44:47.792 "method": "keyring_file_add_key", 00:44:47.792 "params": { 00:44:47.792 "name": "key0", 00:44:47.792 "path": "/tmp/tmp.4gp4TRSlBp" 00:44:47.792 } 00:44:47.792 } 00:44:47.792 Got JSON-RPC error response 00:44:47.792 GoRPCClient: error on JSON-RPC call 00:44:47.792 11:28:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:47.792 11:28:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:47.792 11:28:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:47.792 11:28:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:47.792 11:28:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.4gp4TRSlBp 00:44:47.792 11:28:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:47.792 11:28:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4gp4TRSlBp 00:44:48.051 11:28:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.4gp4TRSlBp 00:44:48.051 11:28:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:48.051 11:28:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:48.051 11:28:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:48.051 11:28:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:48.051 11:28:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.051 11:28:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:48.309 11:28:12 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:48.309 11:28:12 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:48.309 11:28:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:48.309 11:28:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:48.309 11:28:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:48.309 11:28:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:48.309 11:28:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:48.309 11:28:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:48.310 11:28:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:48.310 11:28:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:48.568 [2024-12-05 11:28:12.991597] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.4gp4TRSlBp': No such file or directory 00:44:48.568 [2024-12-05 11:28:12.991640] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:48.568 [2024-12-05 11:28:12.991665] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:48.568 [2024-12-05 11:28:12.991678] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:48.568 [2024-12-05 11:28:12.991692] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:48.568 [2024-12-05 11:28:12.991705] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:48.568 2024/12/05 11:28:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:44:48.568 request: 00:44:48.568 { 00:44:48.569 "method": "bdev_nvme_attach_controller", 00:44:48.569 "params": { 00:44:48.569 "name": "nvme0", 00:44:48.569 "trtype": "tcp", 00:44:48.569 "traddr": "127.0.0.1", 00:44:48.569 "adrfam": "ipv4", 00:44:48.569 "trsvcid": "4420", 00:44:48.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:48.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:48.569 "prchk_reftag": false, 00:44:48.569 "prchk_guard": false, 00:44:48.569 "hdgst": false, 00:44:48.569 "ddgst": false, 00:44:48.569 "psk": "key0", 00:44:48.569 "allow_unrecognized_csi": false 00:44:48.569 } 00:44:48.569 } 00:44:48.569 Got JSON-RPC error response 00:44:48.569 
GoRPCClient: error on JSON-RPC call 00:44:48.569 11:28:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:48.569 11:28:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:48.569 11:28:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:48.569 11:28:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:48.569 11:28:13 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:48.569 11:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:48.827 11:28:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2U9luZ9dsN 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:48.827 11:28:13 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:48.827 11:28:13 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:44:48.827 11:28:13 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:48.827 11:28:13 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:44:48.827 11:28:13 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:44:48.827 11:28:13 keyring_file -- nvmf/common.sh@507 -- # python - 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2U9luZ9dsN 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2U9luZ9dsN 00:44:48.827 11:28:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.2U9luZ9dsN 00:44:48.827 11:28:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2U9luZ9dsN 00:44:48.827 11:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2U9luZ9dsN 00:44:49.084 11:28:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:49.084 11:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:49.341 nvme0n1 00:44:49.341 11:28:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:49.341 11:28:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:49.341 11:28:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:49.341 11:28:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:49.341 11:28:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:49.341 11:28:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
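
[Editor's sketch] With key0 regenerated at a fresh path and the controller re-attached, the next block removes a key that is still in use. As the trace below shows, keyring_file_remove_key succeeds, the key is flagged rather than dropped, and only the controller's reference survives. Condensed, using the helper sketches above:

# condensed form of the remove-while-in-use sequence traced below
(( $(get_refcnt key0) == 2 ))                    # keyring + attached controller
bperf_cmd keyring_file_remove_key key0           # remove while still referenced
[[ $(get_key key0 | jq -r .removed) == true ]]   # key lingers, marked removed
(( $(get_refcnt key0) == 1 ))                    # controller's reference remains
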
00:44:49.600 11:28:14 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:49.600 11:28:14 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:49.600 11:28:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:49.857 11:28:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:49.857 11:28:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:49.857 11:28:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:49.857 11:28:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:49.857 11:28:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:50.420 11:28:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:50.420 11:28:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:50.420 11:28:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:50.420 11:28:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:50.420 11:28:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:50.420 11:28:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:50.420 11:28:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:50.420 11:28:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:50.420 11:28:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:50.420 11:28:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:50.696 11:28:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:50.696 11:28:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:50.696 11:28:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:50.953 11:28:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:50.953 11:28:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2U9luZ9dsN 00:44:50.953 11:28:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2U9luZ9dsN 00:44:51.215 11:28:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.4eLANzQR9T 00:44:51.215 11:28:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.4eLANzQR9T 00:44:51.473 11:28:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:51.473 11:28:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:51.731 nvme0n1 00:44:51.731 11:28:16 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:51.731 11:28:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
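
[Editor's sketch] The save_config dump that follows is the payload for the second half of the test: the first bdevperf is killed and a new one (pid 111875 below) is started with the same configuration replayed through -c, demonstrating that file keys and the TLS-attached controller are restorable from JSON. In outline, with a hypothetical temp file standing in for the echo '{...}' / -c /dev/fd/63 process substitution the trace actually uses:

# outline of the save/restore round-trip performed below
# (/tmp/keyring_config.json is hypothetical; the trace uses -c /dev/fd/63)
bperf_cmd save_config > /tmp/keyring_config.json
kill "$bperfpid" && wait "$bperfpid"             # bperfpid=111400 in this run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw \
    -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /tmp/keyring_config.json
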
00:44:52.299 11:28:16 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:52.299 "subsystems": [ 00:44:52.299 { 00:44:52.299 "subsystem": "keyring", 00:44:52.299 "config": [ 00:44:52.299 { 00:44:52.299 "method": "keyring_file_add_key", 00:44:52.299 "params": { 00:44:52.299 "name": "key0", 00:44:52.299 "path": "/tmp/tmp.2U9luZ9dsN" 00:44:52.299 } 00:44:52.299 }, 00:44:52.299 { 00:44:52.299 "method": "keyring_file_add_key", 00:44:52.299 "params": { 00:44:52.299 "name": "key1", 00:44:52.299 "path": "/tmp/tmp.4eLANzQR9T" 00:44:52.299 } 00:44:52.299 } 00:44:52.299 ] 00:44:52.299 }, 00:44:52.299 { 00:44:52.299 "subsystem": "iobuf", 00:44:52.299 "config": [ 00:44:52.299 { 00:44:52.299 "method": "iobuf_set_options", 00:44:52.299 "params": { 00:44:52.299 "enable_numa": false, 00:44:52.299 "large_bufsize": 135168, 00:44:52.299 "large_pool_count": 1024, 00:44:52.299 "small_bufsize": 8192, 00:44:52.299 "small_pool_count": 8192 00:44:52.299 } 00:44:52.299 } 00:44:52.299 ] 00:44:52.299 }, 00:44:52.299 { 00:44:52.299 "subsystem": "sock", 00:44:52.299 "config": [ 00:44:52.299 { 00:44:52.299 "method": "sock_set_default_impl", 00:44:52.299 "params": { 00:44:52.299 "impl_name": "posix" 00:44:52.299 } 00:44:52.299 }, 00:44:52.299 { 00:44:52.299 "method": "sock_impl_set_options", 00:44:52.299 "params": { 00:44:52.299 "enable_ktls": false, 00:44:52.299 "enable_placement_id": 0, 00:44:52.299 "enable_quickack": false, 00:44:52.299 "enable_recv_pipe": true, 00:44:52.299 "enable_zerocopy_send_client": false, 00:44:52.299 "enable_zerocopy_send_server": true, 00:44:52.299 "impl_name": "ssl", 00:44:52.299 "recv_buf_size": 4096, 00:44:52.299 "send_buf_size": 4096, 00:44:52.299 "tls_version": 0, 00:44:52.299 "zerocopy_threshold": 0 00:44:52.299 } 00:44:52.299 }, 00:44:52.299 { 00:44:52.299 "method": "sock_impl_set_options", 00:44:52.299 "params": { 00:44:52.299 "enable_ktls": false, 00:44:52.300 "enable_placement_id": 0, 00:44:52.300 "enable_quickack": false, 00:44:52.300 "enable_recv_pipe": true, 00:44:52.300 "enable_zerocopy_send_client": false, 00:44:52.300 "enable_zerocopy_send_server": true, 00:44:52.300 "impl_name": "posix", 00:44:52.300 "recv_buf_size": 2097152, 00:44:52.300 "send_buf_size": 2097152, 00:44:52.300 "tls_version": 0, 00:44:52.300 "zerocopy_threshold": 0 00:44:52.300 } 00:44:52.300 } 00:44:52.300 ] 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "subsystem": "vmd", 00:44:52.300 "config": [] 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "subsystem": "accel", 00:44:52.300 "config": [ 00:44:52.300 { 00:44:52.300 "method": "accel_set_options", 00:44:52.300 "params": { 00:44:52.300 "buf_count": 2048, 00:44:52.300 "large_cache_size": 16, 00:44:52.300 "sequence_count": 2048, 00:44:52.300 "small_cache_size": 128, 00:44:52.300 "task_count": 2048 00:44:52.300 } 00:44:52.300 } 00:44:52.300 ] 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "subsystem": "bdev", 00:44:52.300 "config": [ 00:44:52.300 { 00:44:52.300 "method": "bdev_set_options", 00:44:52.300 "params": { 00:44:52.300 "bdev_auto_examine": true, 00:44:52.300 "bdev_io_cache_size": 256, 00:44:52.300 "bdev_io_pool_size": 65535, 00:44:52.300 "iobuf_large_cache_size": 16, 00:44:52.300 "iobuf_small_cache_size": 128 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "bdev_raid_set_options", 00:44:52.300 "params": { 00:44:52.300 "process_max_bandwidth_mb_sec": 0, 00:44:52.300 "process_window_size_kb": 1024 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "bdev_iscsi_set_options", 00:44:52.300 "params": { 00:44:52.300 
"timeout_sec": 30 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "bdev_nvme_set_options", 00:44:52.300 "params": { 00:44:52.300 "action_on_timeout": "none", 00:44:52.300 "allow_accel_sequence": false, 00:44:52.300 "arbitration_burst": 0, 00:44:52.300 "bdev_retry_count": 3, 00:44:52.300 "ctrlr_loss_timeout_sec": 0, 00:44:52.300 "delay_cmd_submit": true, 00:44:52.300 "dhchap_dhgroups": [ 00:44:52.300 "null", 00:44:52.300 "ffdhe2048", 00:44:52.300 "ffdhe3072", 00:44:52.300 "ffdhe4096", 00:44:52.300 "ffdhe6144", 00:44:52.300 "ffdhe8192" 00:44:52.300 ], 00:44:52.300 "dhchap_digests": [ 00:44:52.300 "sha256", 00:44:52.300 "sha384", 00:44:52.300 "sha512" 00:44:52.300 ], 00:44:52.300 "disable_auto_failback": false, 00:44:52.300 "fast_io_fail_timeout_sec": 0, 00:44:52.300 "generate_uuids": false, 00:44:52.300 "high_priority_weight": 0, 00:44:52.300 "io_path_stat": false, 00:44:52.300 "io_queue_requests": 512, 00:44:52.300 "keep_alive_timeout_ms": 10000, 00:44:52.300 "low_priority_weight": 0, 00:44:52.300 "medium_priority_weight": 0, 00:44:52.300 "nvme_adminq_poll_period_us": 10000, 00:44:52.300 "nvme_error_stat": false, 00:44:52.300 "nvme_ioq_poll_period_us": 0, 00:44:52.300 "rdma_cm_event_timeout_ms": 0, 00:44:52.300 "rdma_max_cq_size": 0, 00:44:52.300 "rdma_srq_size": 0, 00:44:52.300 "reconnect_delay_sec": 0, 00:44:52.300 "timeout_admin_us": 0, 00:44:52.300 "timeout_us": 0, 00:44:52.300 "transport_ack_timeout": 0, 00:44:52.300 "transport_retry_count": 4, 00:44:52.300 "transport_tos": 0 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "bdev_nvme_attach_controller", 00:44:52.300 "params": { 00:44:52.300 "adrfam": "IPv4", 00:44:52.300 "ctrlr_loss_timeout_sec": 0, 00:44:52.300 "ddgst": false, 00:44:52.300 "fast_io_fail_timeout_sec": 0, 00:44:52.300 "hdgst": false, 00:44:52.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:52.300 "multipath": "multipath", 00:44:52.300 "name": "nvme0", 00:44:52.300 "prchk_guard": false, 00:44:52.300 "prchk_reftag": false, 00:44:52.300 "psk": "key0", 00:44:52.300 "reconnect_delay_sec": 0, 00:44:52.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:52.300 "traddr": "127.0.0.1", 00:44:52.300 "trsvcid": "4420", 00:44:52.300 "trtype": "TCP" 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "bdev_nvme_set_hotplug", 00:44:52.300 "params": { 00:44:52.300 "enable": false, 00:44:52.300 "period_us": 100000 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "bdev_wait_for_examine" 00:44:52.300 } 00:44:52.300 ] 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "subsystem": "nbd", 00:44:52.300 "config": [] 00:44:52.300 } 00:44:52.300 ] 00:44:52.300 }' 00:44:52.300 11:28:16 keyring_file -- keyring/file.sh@115 -- # killprocess 111400 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111400 ']' 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111400 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111400 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:52.300 killing process with pid 111400 00:44:52.300 Received shutdown signal, test time was about 1.000000 seconds 00:44:52.300 00:44:52.300 Latency(us) 00:44:52.300 [2024-12-05T11:28:16.952Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:44:52.300 [2024-12-05T11:28:16.952Z] =================================================================================================================== 00:44:52.300 [2024-12-05T11:28:16.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111400' 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@973 -- # kill 111400 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@978 -- # wait 111400 00:44:52.300 11:28:16 keyring_file -- keyring/file.sh@118 -- # bperfpid=111875 00:44:52.300 11:28:16 keyring_file -- keyring/file.sh@120 -- # waitforlisten 111875 /var/tmp/bperf.sock 00:44:52.300 11:28:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111875 ']' 00:44:52.300 11:28:16 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:52.300 "subsystems": [ 00:44:52.300 { 00:44:52.300 "subsystem": "keyring", 00:44:52.300 "config": [ 00:44:52.300 { 00:44:52.300 "method": "keyring_file_add_key", 00:44:52.300 "params": { 00:44:52.300 "name": "key0", 00:44:52.300 "path": "/tmp/tmp.2U9luZ9dsN" 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "keyring_file_add_key", 00:44:52.300 "params": { 00:44:52.300 "name": "key1", 00:44:52.300 "path": "/tmp/tmp.4eLANzQR9T" 00:44:52.300 } 00:44:52.300 } 00:44:52.300 ] 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "subsystem": "iobuf", 00:44:52.300 "config": [ 00:44:52.300 { 00:44:52.300 "method": "iobuf_set_options", 00:44:52.300 "params": { 00:44:52.300 "enable_numa": false, 00:44:52.300 "large_bufsize": 135168, 00:44:52.300 "large_pool_count": 1024, 00:44:52.300 "small_bufsize": 8192, 00:44:52.300 "small_pool_count": 8192 00:44:52.300 } 00:44:52.300 } 00:44:52.300 ] 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "subsystem": "sock", 00:44:52.300 "config": [ 00:44:52.300 { 00:44:52.300 "method": "sock_set_default_impl", 00:44:52.300 "params": { 00:44:52.300 "impl_name": "posix" 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "sock_impl_set_options", 00:44:52.300 "params": { 00:44:52.300 "enable_ktls": false, 00:44:52.300 "enable_placement_id": 0, 00:44:52.300 "enable_quickack": false, 00:44:52.300 "enable_recv_pipe": true, 00:44:52.300 "enable_zerocopy_send_client": false, 00:44:52.300 "enable_zerocopy_send_server": true, 00:44:52.300 "impl_name": "ssl", 00:44:52.300 "recv_buf_size": 4096, 00:44:52.300 "send_buf_size": 4096, 00:44:52.300 "tls_version": 0, 00:44:52.300 "zerocopy_threshold": 0 00:44:52.300 } 00:44:52.300 }, 00:44:52.300 { 00:44:52.300 "method": "sock_impl_set_options", 00:44:52.301 "params": { 00:44:52.301 "enable_ktls": false, 00:44:52.301 "enable_placement_id": 0, 00:44:52.301 "enable_quickack": false, 00:44:52.301 "enable_recv_pipe": true, 00:44:52.301 "enable_zerocopy_send_client": false, 00:44:52.301 "enable_zerocopy_send_server": true, 00:44:52.301 "impl_name": "posix", 00:44:52.301 "recv_buf_size": 2097152, 00:44:52.301 "send_buf_size": 2097152, 00:44:52.301 "tls_version": 0, 00:44:52.301 "zerocopy_threshold": 0 00:44:52.301 } 00:44:52.301 } 00:44:52.301 ] 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "subsystem": "vmd", 00:44:52.301 "config": [] 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "subsystem": "accel", 00:44:52.301 "config": [ 00:44:52.301 { 00:44:52.301 "method": "accel_set_options", 00:44:52.301 "params": { 00:44:52.301 "buf_count": 2048, 
00:44:52.301 "large_cache_size": 16, 00:44:52.301 "sequence_count": 2048, 00:44:52.301 "small_cache_size": 128, 00:44:52.301 "task_count": 2048 00:44:52.301 } 00:44:52.301 } 00:44:52.301 ] 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "subsystem": "bdev", 00:44:52.301 "config": [ 00:44:52.301 { 00:44:52.301 "method": "bdev_set_options", 00:44:52.301 "params": { 00:44:52.301 "bdev_auto_examine": true, 00:44:52.301 "bdev_io_cache_size": 256, 00:44:52.301 "bdev_io_pool_size": 65535, 00:44:52.301 "iobuf_large_cache_size": 16, 00:44:52.301 "iobuf_small_cache_size": 128 00:44:52.301 } 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "method": "bdev_raid_set_options", 00:44:52.301 "params": { 00:44:52.301 "process_max_bandwidth_mb_sec": 0, 00:44:52.301 "process_window_size_kb": 1024 00:44:52.301 } 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "method": "bdev_iscsi_set_options", 00:44:52.301 "params": { 00:44:52.301 "timeout_sec": 30 00:44:52.301 } 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "method": "bdev_nvme_set_options", 00:44:52.301 "params": { 00:44:52.301 "action_on_timeout": "none", 00:44:52.301 "allow_accel_sequence": false, 00:44:52.301 "arbitration_burst": 0, 00:44:52.301 "bdev_retry_count": 3, 00:44:52.301 "ctrlr_loss_timeout_sec": 0, 00:44:52.301 "delay_cmd_submit": true, 00:44:52.301 "dhchap_dhgroups": [ 00:44:52.301 "null", 00:44:52.301 "ffdhe2048", 00:44:52.301 "ffdhe3072", 00:44:52.301 "ffdhe4096", 00:44:52.301 "ffdhe6144", 00:44:52.301 "ffdhe8192" 00:44:52.301 ], 00:44:52.301 "dhchap_digests": [ 00:44:52.301 "sha256", 00:44:52.301 "sha384", 00:44:52.301 "sha512" 00:44:52.301 ], 00:44:52.301 "disable_auto_failback": false, 00:44:52.301 "fast_io_fail_timeout_sec": 0, 00:44:52.301 "generate_uuids": false, 00:44:52.301 "high_priority_weight": 0, 00:44:52.301 "io_path_stat": false, 00:44:52.301 "io_queue_requests": 512, 00:44:52.301 "keep_alive_timeout_ms": 10000, 00:44:52.301 "low_priority_weight": 0, 00:44:52.301 "medium_priority_weight": 0, 00:44:52.301 "nvme_adminq_poll_period_us": 10000, 00:44:52.301 "nvme_error_stat": false, 00:44:52.301 "nvme_ioq_poll_period_us": 0, 00:44:52.301 "rdma_cm_event_timeout_ms": 0, 00:44:52.301 "rdma_max_cq_size": 0, 00:44:52.301 "rdma_srq_size": 0, 00:44:52.301 "reconnect_delay_sec": 0, 00:44:52.301 "timeout_admin_us": 0, 00:44:52.301 "timeout_us": 0, 00:44:52.301 "transport_ack_timeout": 0, 00:44:52.301 "transport_retry_count": 4, 00:44:52.301 "transport_tos": 0 00:44:52.301 } 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "method": "bdev_nvme_attach_controller", 00:44:52.301 "params": { 00:44:52.301 "adrfam": "IPv4", 00:44:52.301 "ctrlr_loss_timeout_sec": 0, 00:44:52.301 "ddgst": false, 00:44:52.301 "fast_io_fail_timeout_sec": 0, 00:44:52.301 "hdgst": false, 00:44:52.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:52.301 "multipath": "multipath", 00:44:52.301 "name": "nvme0", 00:44:52.301 "prchk_guard": false, 00:44:52.301 "prchk_reftag": false, 00:44:52.301 "psk": "key0", 00:44:52.301 "reconnect_delay_sec": 0, 00:44:52.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:52.301 "traddr": "127.0.0.1", 00:44:52.301 "trsvcid": "4420", 00:44:52.301 "trtype": "TCP" 00:44:52.301 } 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "method": "bdev_nvme_set_hotplug", 00:44:52.301 "params": { 00:44:52.301 "enable": false, 00:44:52.301 "period_us": 100000 00:44:52.301 } 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "method": "bdev_wait_for_examine" 00:44:52.301 } 00:44:52.301 ] 00:44:52.301 }, 00:44:52.301 { 00:44:52.301 "subsystem": "nbd", 00:44:52.301 "config": [] 
00:44:52.301 } 00:44:52.301 ] 00:44:52.301 }' 00:44:52.301 11:28:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:52.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:52.301 11:28:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:52.301 11:28:16 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:52.301 11:28:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:52.301 11:28:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:52.301 11:28:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:52.301 [2024-12-05 11:28:16.947033] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:44:52.301 [2024-12-05 11:28:16.947128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111875 ] 00:44:52.561 [2024-12-05 11:28:17.093828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:52.561 [2024-12-05 11:28:17.150056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:52.820 [2024-12-05 11:28:17.319165] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:53.386 11:28:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:53.386 11:28:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:53.386 11:28:17 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:53.386 11:28:17 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:53.386 11:28:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:53.644 11:28:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:53.644 11:28:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:53.644 11:28:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:53.644 11:28:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:53.644 11:28:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:53.644 11:28:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:53.644 11:28:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:53.902 11:28:18 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:53.902 11:28:18 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:53.902 11:28:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:53.902 11:28:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:53.902 11:28:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:53.902 11:28:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:53.902 11:28:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:54.161 11:28:18 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:54.161 11:28:18 keyring_file -- 
keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:54.161 11:28:18 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:54.161 11:28:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:54.420 11:28:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:54.421 11:28:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:54.421 11:28:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2U9luZ9dsN /tmp/tmp.4eLANzQR9T 00:44:54.421 11:28:19 keyring_file -- keyring/file.sh@20 -- # killprocess 111875 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111875 ']' 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111875 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111875 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:54.421 killing process with pid 111875 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111875' 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@973 -- # kill 111875 00:44:54.421 Received shutdown signal, test time was about 1.000000 seconds 00:44:54.421 00:44:54.421 Latency(us) 00:44:54.421 [2024-12-05T11:28:19.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:54.421 [2024-12-05T11:28:19.073Z] =================================================================================================================== 00:44:54.421 [2024-12-05T11:28:19.073Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:54.421 11:28:19 keyring_file -- common/autotest_common.sh@978 -- # wait 111875 00:44:54.680 11:28:19 keyring_file -- keyring/file.sh@21 -- # killprocess 111365 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111365 ']' 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111365 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111365 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111365' 00:44:54.680 killing process with pid 111365 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@973 -- # kill 111365 00:44:54.680 11:28:19 keyring_file -- common/autotest_common.sh@978 -- # wait 111365 00:44:55.247 00:44:55.247 real 0m16.610s 00:44:55.247 user 0m40.323s 00:44:55.247 sys 0m3.823s 00:44:55.247 11:28:19 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:55.247 11:28:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:55.247 ************************************ 00:44:55.247 END TEST keyring_file 00:44:55.247 
************************************ 00:44:55.247 11:28:19 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:55.247 11:28:19 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:44:55.247 11:28:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:55.247 11:28:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:55.247 11:28:19 -- common/autotest_common.sh@10 -- # set +x 00:44:55.247 ************************************ 00:44:55.247 START TEST keyring_linux 00:44:55.247 ************************************ 00:44:55.247 11:28:19 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:44:55.247 Joined session keyring: 879056343 00:44:55.507 * Looking for test storage... 00:44:55.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:44:55.507 11:28:19 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:55.507 11:28:19 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:55.507 11:28:19 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:55.507 11:28:20 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:55.507 11:28:20 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:55.507 11:28:20 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:55.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:55.507 --rc genhtml_branch_coverage=1 00:44:55.507 --rc genhtml_function_coverage=1 00:44:55.507 --rc genhtml_legend=1 00:44:55.507 --rc geninfo_all_blocks=1 00:44:55.507 --rc geninfo_unexecuted_blocks=1 00:44:55.507 00:44:55.507 ' 00:44:55.507 11:28:20 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:55.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:55.507 --rc genhtml_branch_coverage=1 00:44:55.507 --rc genhtml_function_coverage=1 00:44:55.507 --rc genhtml_legend=1 00:44:55.507 --rc geninfo_all_blocks=1 00:44:55.507 --rc geninfo_unexecuted_blocks=1 00:44:55.507 00:44:55.507 ' 00:44:55.507 11:28:20 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:55.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:55.507 --rc genhtml_branch_coverage=1 00:44:55.507 --rc genhtml_function_coverage=1 00:44:55.507 --rc genhtml_legend=1 00:44:55.507 --rc geninfo_all_blocks=1 00:44:55.507 --rc geninfo_unexecuted_blocks=1 00:44:55.507 00:44:55.507 ' 00:44:55.507 11:28:20 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:55.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:55.507 --rc genhtml_branch_coverage=1 00:44:55.507 --rc genhtml_function_coverage=1 00:44:55.507 --rc genhtml_legend=1 00:44:55.507 --rc geninfo_all_blocks=1 00:44:55.507 --rc geninfo_unexecuted_blocks=1 00:44:55.507 00:44:55.507 ' 00:44:55.507 11:28:20 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:44:55.507 11:28:20 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:55.507 11:28:20 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=f58d48c7-b3e4-4841-baf8-b1718dbfb2a6 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:55.507 11:28:20 keyring_linux -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:55.507 11:28:20 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:55.507 11:28:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:55.507 11:28:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:55.508 11:28:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:55.508 11:28:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:55.508 11:28:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:44:55.508 11:28:20 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:44:55.508 11:28:20 keyring_linux -- 
nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:44:55.508 11:28:20 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:44:55.508 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:55.508 11:28:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:55.508 11:28:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:55.508 11:28:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:55.508 11:28:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:55.508 11:28:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:55.508 11:28:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:44:55.508 11:28:20 keyring_linux -- nvmf/common.sh@507 -- # python - 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:55.508 /tmp/:spdk-test:key0 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:55.508 11:28:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:55.508 11:28:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:55.767 11:28:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:55.767 11:28:20 keyring_linux -- keyring/common.sh@17 -- # 
digest=0 00:44:55.767 11:28:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:55.767 11:28:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:55.767 11:28:20 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:55.767 11:28:20 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:44:55.767 11:28:20 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:44:55.767 11:28:20 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:44:55.767 11:28:20 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:44:55.767 11:28:20 keyring_linux -- nvmf/common.sh@507 -- # python - 00:44:55.767 11:28:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:55.767 /tmp/:spdk-test:key1 00:44:55.767 11:28:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:55.767 11:28:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=112037 00:44:55.767 11:28:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 112037 00:44:55.767 11:28:20 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:55.767 11:28:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 112037 ']' 00:44:55.767 11:28:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:55.767 11:28:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:55.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:55.767 11:28:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:55.767 11:28:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:55.767 11:28:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:55.767 [2024-12-05 11:28:20.282613] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
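A note on the key files prepared above: prep_key writes each PSK to /tmp/:spdk-test:keyN in the NVMe/TCP PSK interchange format, i.e. "NVMeTLSkey-1:<2-hex-digit hash id>:<base64(PSK bytes || 4-byte CRC32)>:". A minimal sketch of what the inline "python -" behind format_interchange_psk plausibly computes (the standalone invocation and the little-endian CRC byte order are assumptions, not taken from this log):

python3 - 00112233445566778899aabbccddeeff 0 <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                 # configured PSK bytes (here, the hex string itself)
hash_id = int(sys.argv[2])                 # 0 = no PSK digest
crc = struct.pack('<I', zlib.crc32(key))   # trailing 4-byte CRC32, little-endian (assumed)
print('NVMeTLSkey-1:{:02x}:{}:'.format(hash_id, base64.b64encode(key + crc).decode()))
PYEOF

Under those assumptions this reproduces the key0 value loaded into the keyring below, NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:.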
00:44:55.767 [2024-12-05 11:28:20.282711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112037 ] 00:44:56.026 [2024-12-05 11:28:20.441738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:56.026 [2024-12-05 11:28:20.534931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:56.962 11:28:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:56.962 [2024-12-05 11:28:21.332195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:56.962 null0 00:44:56.962 [2024-12-05 11:28:21.364140] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:56.962 [2024-12-05 11:28:21.364413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.962 11:28:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:56.962 598315848 00:44:56.962 11:28:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:56.962 871637793 00:44:56.962 11:28:21 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:56.962 11:28:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=112069 00:44:56.962 11:28:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 112069 /var/tmp/bperf.sock 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 112069 ']' 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:56.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:56.962 11:28:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:56.962 [2024-12-05 11:28:21.450934] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
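The two keyctl add calls above load the formatted PSKs as "user"-type keys named :spdk-test:key0 and :spdk-test:key1 into the session keyring (@s); the numbers printed (598315848 and 871637793) are the kernel-assigned key serials. Everything the test later does with these keys reduces to the round-trip sketched here, using the same commands that appear in linux.sh further down:

sn=$(keyctl search @s user :spdk-test:key0)   # resolve key name to its serial (linux.sh get_keysn)
keyctl print "$sn"                            # dump the payload, i.e. the NVMeTLSkey-1:... string
keyctl unlink "$sn"                           # drop the key again (cleanup's unlink_key)

This session-keyring lookup is what lets bdevperf resolve --psk :spdk-test:key0 once keyring_linux_set_options --enable has been sent over /var/tmp/bperf.sock.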
00:44:56.962 [2024-12-05 11:28:21.451518] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112069 ] 00:44:56.962 [2024-12-05 11:28:21.608321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:57.222 [2024-12-05 11:28:21.702693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:57.790 11:28:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:57.790 11:28:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:57.790 11:28:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:57.790 11:28:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:58.049 11:28:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:58.049 11:28:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:58.616 11:28:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:58.616 11:28:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:58.874 [2024-12-05 11:28:23.280578] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:58.874 nvme0n1 00:44:58.874 11:28:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:58.874 11:28:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:58.874 11:28:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:58.874 11:28:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:58.874 11:28:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:58.874 11:28:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:59.132 11:28:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:59.132 11:28:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:59.132 11:28:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:59.133 11:28:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:59.133 11:28:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:59.133 11:28:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:59.133 11:28:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:59.391 11:28:23 keyring_linux -- keyring/linux.sh@25 -- # sn=598315848 00:44:59.391 11:28:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:59.391 11:28:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:59.391 11:28:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 598315848 == \5\9\8\3\1\5\8\4\8 ]] 00:44:59.391 11:28:23 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 598315848 00:44:59.391 11:28:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:59.391 11:28:23 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:59.649 Running I/O for 1 seconds... 00:45:00.582 16982.00 IOPS, 66.34 MiB/s 00:45:00.582 Latency(us) 00:45:00.582 [2024-12-05T11:28:25.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:00.582 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:45:00.582 nvme0n1 : 1.01 16980.15 66.33 0.00 0.00 7506.77 2730.67 9611.95 00:45:00.582 [2024-12-05T11:28:25.234Z] =================================================================================================================== 00:45:00.582 [2024-12-05T11:28:25.234Z] Total : 16980.15 66.33 0.00 0.00 7506.77 2730.67 9611.95 00:45:00.582 { 00:45:00.582 "results": [ 00:45:00.582 { 00:45:00.582 "job": "nvme0n1", 00:45:00.582 "core_mask": "0x2", 00:45:00.582 "workload": "randread", 00:45:00.582 "status": "finished", 00:45:00.582 "queue_depth": 128, 00:45:00.582 "io_size": 4096, 00:45:00.582 "runtime": 1.007647, 00:45:00.582 "iops": 16980.152771754394, 00:45:00.582 "mibps": 66.3287217646656, 00:45:00.582 "io_failed": 0, 00:45:00.582 "io_timeout": 0, 00:45:00.582 "avg_latency_us": 7506.767049288915, 00:45:00.582 "min_latency_us": 2730.6666666666665, 00:45:00.582 "max_latency_us": 9611.946666666667 00:45:00.582 } 00:45:00.582 ], 00:45:00.582 "core_count": 1 00:45:00.582 } 00:45:00.582 11:28:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:45:00.582 11:28:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:45:00.839 11:28:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:45:00.839 11:28:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:45:00.839 11:28:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:45:00.839 11:28:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:45:00.839 11:28:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:45:00.839 11:28:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:45:01.096 11:28:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:45:01.096 11:28:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:45:01.096 11:28:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:45:01.096 11:28:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:01.096 11:28:25 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:45:01.096 11:28:25 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:01.096 11:28:25 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:45:01.096 11:28:25 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:01.096 11:28:25 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:45:01.096 11:28:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:45:01.096 11:28:25 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:01.096 11:28:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:45:01.354 [2024-12-05 11:28:25.926837] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:45:01.354 [2024-12-05 11:28:25.926919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6c4b0 (107): Transport endpoint is not connected 00:45:01.354 [2024-12-05 11:28:25.927910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6c4b0 (9): Bad file descriptor 00:45:01.354 [2024-12-05 11:28:25.928909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:45:01.354 [2024-12-05 11:28:25.929046] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:45:01.354 [2024-12-05 11:28:25.929121] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:45:01.354 [2024-12-05 11:28:25.929221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
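The failing attach above is the intended negative test: the target was configured with key0, so dialing it with --psk :spdk-test:key1 must be rejected, and the NOT wrapper inverts the exit status so that the expected failure counts as a pass. Stripped of the type checks traced above, the helper amounts to this simplified sketch (as the es handling below shows, the real autotest helper additionally treats signal exits above 128 as genuine failures):

NOT() {
    # invert the exit status: succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "expected failure observed"   # usage: a failing command makes NOT return 0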
00:45:01.354 2024/12/05 11:28:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:45:01.354 request: 00:45:01.354 { 00:45:01.354 "method": "bdev_nvme_attach_controller", 00:45:01.354 "params": { 00:45:01.354 "name": "nvme0", 00:45:01.354 "trtype": "tcp", 00:45:01.354 "traddr": "127.0.0.1", 00:45:01.354 "adrfam": "ipv4", 00:45:01.354 "trsvcid": "4420", 00:45:01.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:01.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:01.354 "prchk_reftag": false, 00:45:01.354 "prchk_guard": false, 00:45:01.354 "hdgst": false, 00:45:01.354 "ddgst": false, 00:45:01.354 "psk": ":spdk-test:key1", 00:45:01.354 "allow_unrecognized_csi": false 00:45:01.354 } 00:45:01.354 } 00:45:01.354 Got JSON-RPC error response 00:45:01.354 GoRPCClient: error on JSON-RPC call 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@33 -- # sn=598315848 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 598315848 00:45:01.354 1 links removed 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@33 -- # sn=871637793 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 871637793 00:45:01.354 1 links removed 00:45:01.354 11:28:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 112069 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 112069 ']' 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 112069 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:01.354 11:28:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112069 00:45:01.611 11:28:26 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:45:01.611 
11:28:26 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:45:01.611 killing process with pid 112069 00:45:01.611 11:28:26 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112069' 00:45:01.611 11:28:26 keyring_linux -- common/autotest_common.sh@973 -- # kill 112069 00:45:01.611 Received shutdown signal, test time was about 1.000000 seconds 00:45:01.611 00:45:01.611 Latency(us) 00:45:01.611 [2024-12-05T11:28:26.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:01.611 [2024-12-05T11:28:26.263Z] =================================================================================================================== 00:45:01.611 [2024-12-05T11:28:26.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:01.611 11:28:26 keyring_linux -- common/autotest_common.sh@978 -- # wait 112069 00:45:01.868 11:28:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 112037 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 112037 ']' 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 112037 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112037 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:01.868 killing process with pid 112037 00:45:01.868 11:28:26 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112037' 00:45:01.869 11:28:26 keyring_linux -- common/autotest_common.sh@973 -- # kill 112037 00:45:01.869 11:28:26 keyring_linux -- common/autotest_common.sh@978 -- # wait 112037 00:45:02.434 00:45:02.434 real 0m7.017s 00:45:02.434 user 0m12.879s 00:45:02.434 sys 0m2.178s 00:45:02.434 11:28:26 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:02.434 11:28:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:45:02.434 ************************************ 00:45:02.434 END TEST keyring_linux 00:45:02.434 ************************************ 00:45:02.434 11:28:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:45:02.434 11:28:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:45:02.434 11:28:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:45:02.434 11:28:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:45:02.434 11:28:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:45:02.434 11:28:26 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:45:02.434 11:28:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:45:02.434 11:28:26 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:45:02.434 11:28:26 -- common/autotest_common.sh@10 -- # set +x 00:45:02.434 11:28:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:45:02.434 11:28:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:45:02.434 11:28:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:45:02.434 11:28:26 -- common/autotest_common.sh@10 -- # set +x 00:45:04.995 INFO: APP EXITING 00:45:04.995 INFO: killing all VMs 00:45:04.995 INFO: killing vhost app 00:45:04.995 INFO: EXIT DONE 00:45:05.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:05.560 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:45:05.560 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:45:06.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:06.496 Cleaning 00:45:06.496 Removing: /var/run/dpdk/spdk0/config 00:45:06.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:06.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:06.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:06.496 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:06.496 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:06.496 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:06.496 Removing: /var/run/dpdk/spdk1/config 00:45:06.496 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:06.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:06.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:06.497 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:06.497 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:06.497 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:06.497 Removing: /var/run/dpdk/spdk2/config 00:45:06.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:06.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:06.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:06.497 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:06.497 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:06.497 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:06.497 Removing: /var/run/dpdk/spdk3/config 00:45:06.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:06.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:06.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:06.497 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:06.497 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:06.497 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:06.497 Removing: /var/run/dpdk/spdk4/config 00:45:06.497 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:06.497 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:06.497 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:06.497 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:06.497 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:06.497 Removing: /var/run/dpdk/spdk4/hugepage_info 00:45:06.497 Removing: /dev/shm/nvmf_trace.0 00:45:06.497 Removing: /dev/shm/spdk_tgt_trace.pid58711 00:45:06.497 Removing: /var/run/dpdk/spdk0 00:45:06.497 Removing: /var/run/dpdk/spdk1 00:45:06.497 Removing: /var/run/dpdk/spdk2 00:45:06.497 Removing: /var/run/dpdk/spdk3 00:45:06.497 Removing: /var/run/dpdk/spdk4 00:45:06.497 Removing: /var/run/dpdk/spdk_pid101758 00:45:06.497 Removing: 
/var/run/dpdk/spdk_pid101806 00:45:06.497 Removing: /var/run/dpdk/spdk_pid102162 00:45:06.497 Removing: /var/run/dpdk/spdk_pid102208 00:45:06.755 Removing: /var/run/dpdk/spdk_pid102603 00:45:06.755 Removing: /var/run/dpdk/spdk_pid103158 00:45:06.755 Removing: /var/run/dpdk/spdk_pid103594 00:45:06.756 Removing: /var/run/dpdk/spdk_pid104645 00:45:06.756 Removing: /var/run/dpdk/spdk_pid105678 00:45:06.756 Removing: /var/run/dpdk/spdk_pid105795 00:45:06.756 Removing: /var/run/dpdk/spdk_pid105853 00:45:06.756 Removing: /var/run/dpdk/spdk_pid107456 00:45:06.756 Removing: /var/run/dpdk/spdk_pid107772 00:45:06.756 Removing: /var/run/dpdk/spdk_pid108094 00:45:06.756 Removing: /var/run/dpdk/spdk_pid108692 00:45:06.756 Removing: /var/run/dpdk/spdk_pid108697 00:45:06.756 Removing: /var/run/dpdk/spdk_pid109116 00:45:06.756 Removing: /var/run/dpdk/spdk_pid109275 00:45:06.756 Removing: /var/run/dpdk/spdk_pid109433 00:45:06.756 Removing: /var/run/dpdk/spdk_pid109529 00:45:06.756 Removing: /var/run/dpdk/spdk_pid109687 00:45:06.756 Removing: /var/run/dpdk/spdk_pid109795 00:45:06.756 Removing: /var/run/dpdk/spdk_pid110510 00:45:06.756 Removing: /var/run/dpdk/spdk_pid110545 00:45:06.756 Removing: /var/run/dpdk/spdk_pid110585 00:45:06.756 Removing: /var/run/dpdk/spdk_pid110843 00:45:06.756 Removing: /var/run/dpdk/spdk_pid110874 00:45:06.756 Removing: /var/run/dpdk/spdk_pid110908 00:45:06.756 Removing: /var/run/dpdk/spdk_pid111365 00:45:06.756 Removing: /var/run/dpdk/spdk_pid111400 00:45:06.756 Removing: /var/run/dpdk/spdk_pid111875 00:45:06.756 Removing: /var/run/dpdk/spdk_pid112037 00:45:06.756 Removing: /var/run/dpdk/spdk_pid112069 00:45:06.756 Removing: /var/run/dpdk/spdk_pid58558 00:45:06.756 Removing: /var/run/dpdk/spdk_pid58711 00:45:06.756 Removing: /var/run/dpdk/spdk_pid58967 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59059 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59099 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59208 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59238 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59372 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59652 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59838 00:45:06.756 Removing: /var/run/dpdk/spdk_pid59928 00:45:06.756 Removing: /var/run/dpdk/spdk_pid60022 00:45:06.756 Removing: /var/run/dpdk/spdk_pid60121 00:45:06.756 Removing: /var/run/dpdk/spdk_pid60164 00:45:06.756 Removing: /var/run/dpdk/spdk_pid60195 00:45:06.756 Removing: /var/run/dpdk/spdk_pid60259 00:45:06.756 Removing: /var/run/dpdk/spdk_pid60382 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61021 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61085 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61141 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61155 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61234 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61262 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61341 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61369 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61425 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61451 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61503 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61520 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61676 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61712 00:45:06.756 Removing: /var/run/dpdk/spdk_pid61793 00:45:06.756 Removing: /var/run/dpdk/spdk_pid62265 00:45:06.756 Removing: /var/run/dpdk/spdk_pid62626 00:45:06.756 Removing: /var/run/dpdk/spdk_pid65173 00:45:06.756 Removing: /var/run/dpdk/spdk_pid65224 00:45:06.756 Removing: /var/run/dpdk/spdk_pid65571 00:45:06.756 Removing: 
/var/run/dpdk/spdk_pid65626 00:45:06.756 Removing: /var/run/dpdk/spdk_pid66045 00:45:06.756 Removing: /var/run/dpdk/spdk_pid66617 00:45:06.756 Removing: /var/run/dpdk/spdk_pid67078 00:45:07.014 Removing: /var/run/dpdk/spdk_pid68124 00:45:07.014 Removing: /var/run/dpdk/spdk_pid69210 00:45:07.014 Removing: /var/run/dpdk/spdk_pid69329 00:45:07.014 Removing: /var/run/dpdk/spdk_pid69397 00:45:07.014 Removing: /var/run/dpdk/spdk_pid71011 00:45:07.014 Removing: /var/run/dpdk/spdk_pid71364 00:45:07.014 Removing: /var/run/dpdk/spdk_pid75224 00:45:07.014 Removing: /var/run/dpdk/spdk_pid75646 00:45:07.014 Removing: /var/run/dpdk/spdk_pid76271 00:45:07.014 Removing: /var/run/dpdk/spdk_pid76821 00:45:07.014 Removing: /var/run/dpdk/spdk_pid82705 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83210 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83316 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83467 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83506 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83558 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83597 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83776 00:45:07.014 Removing: /var/run/dpdk/spdk_pid83923 00:45:07.014 Removing: /var/run/dpdk/spdk_pid84195 00:45:07.014 Removing: /var/run/dpdk/spdk_pid84310 00:45:07.014 Removing: /var/run/dpdk/spdk_pid84580 00:45:07.014 Removing: /var/run/dpdk/spdk_pid84705 00:45:07.014 Removing: /var/run/dpdk/spdk_pid84830 00:45:07.014 Removing: /var/run/dpdk/spdk_pid85224 00:45:07.014 Removing: /var/run/dpdk/spdk_pid85664 00:45:07.014 Removing: /var/run/dpdk/spdk_pid85665 00:45:07.014 Removing: /var/run/dpdk/spdk_pid85666 00:45:07.014 Removing: /var/run/dpdk/spdk_pid85947 00:45:07.014 Removing: /var/run/dpdk/spdk_pid86229 00:45:07.014 Removing: /var/run/dpdk/spdk_pid86660 00:45:07.014 Removing: /var/run/dpdk/spdk_pid87019 00:45:07.014 Removing: /var/run/dpdk/spdk_pid87620 00:45:07.014 Removing: /var/run/dpdk/spdk_pid87622 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88031 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88048 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88062 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88099 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88105 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88492 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88541 00:45:07.015 Removing: /var/run/dpdk/spdk_pid88918 00:45:07.015 Removing: /var/run/dpdk/spdk_pid89170 00:45:07.015 Removing: /var/run/dpdk/spdk_pid89715 00:45:07.015 Removing: /var/run/dpdk/spdk_pid90321 00:45:07.015 Removing: /var/run/dpdk/spdk_pid91703 00:45:07.015 Removing: /var/run/dpdk/spdk_pid92368 00:45:07.015 Removing: /var/run/dpdk/spdk_pid92370 00:45:07.015 Removing: /var/run/dpdk/spdk_pid94679 00:45:07.015 Removing: /var/run/dpdk/spdk_pid94750 00:45:07.015 Removing: /var/run/dpdk/spdk_pid94841 00:45:07.015 Removing: /var/run/dpdk/spdk_pid94919 00:45:07.015 Removing: /var/run/dpdk/spdk_pid95061 00:45:07.015 Removing: /var/run/dpdk/spdk_pid95149 00:45:07.015 Removing: /var/run/dpdk/spdk_pid95239 00:45:07.015 Removing: /var/run/dpdk/spdk_pid95324 00:45:07.015 Removing: /var/run/dpdk/spdk_pid95698 00:45:07.015 Removing: /var/run/dpdk/spdk_pid96448 00:45:07.015 Removing: /var/run/dpdk/spdk_pid97863 00:45:07.015 Removing: /var/run/dpdk/spdk_pid98070 00:45:07.015 Removing: /var/run/dpdk/spdk_pid98355 00:45:07.015 Removing: /var/run/dpdk/spdk_pid98908 00:45:07.015 Removing: /var/run/dpdk/spdk_pid99299 00:45:07.015 Clean 00:45:07.274 11:28:31 -- common/autotest_common.sh@1453 -- # return 0 00:45:07.274 11:28:31 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 00:45:07.274 11:28:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:07.274 11:28:31 -- common/autotest_common.sh@10 -- # set +x 00:45:07.274 11:28:31 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:07.274 11:28:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:07.274 11:28:31 -- common/autotest_common.sh@10 -- # set +x 00:45:07.274 11:28:31 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:45:07.274 11:28:31 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:45:07.274 11:28:31 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:45:07.274 11:28:31 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:07.274 11:28:31 -- spdk/autotest.sh@398 -- # hostname 00:45:07.274 11:28:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:45:07.533 geninfo: WARNING: invalid characters removed from testname! 00:45:34.071 11:28:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:35.459 11:28:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:37.981 11:29:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:39.883 11:29:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:42.412 11:29:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:44.944 11:29:09 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:45:46.852 11:29:11 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:46.852 11:29:11 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:46.852 11:29:11 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:45:46.852 11:29:11 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:46.852 11:29:11 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:46.852 11:29:11 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:45:46.852 + [[ -n 5254 ]] 00:45:46.852 + sudo kill 5254 00:45:47.118 [Pipeline] } 00:45:47.134 [Pipeline] // timeout 00:45:47.140 [Pipeline] } 00:45:47.155 [Pipeline] // stage 00:45:47.161 [Pipeline] } 00:45:47.176 [Pipeline] // catchError 00:45:47.186 [Pipeline] stage 00:45:47.188 [Pipeline] { (Stop VM) 00:45:47.200 [Pipeline] sh 00:45:47.479 + vagrant halt 00:45:50.845 ==> default: Halting domain... 00:45:57.434 [Pipeline] sh 00:45:57.712 + vagrant destroy -f 00:46:00.993 ==> default: Removing domain... 00:46:01.003 [Pipeline] sh 00:46:01.281 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:46:01.288 [Pipeline] } 00:46:01.302 [Pipeline] // stage 00:46:01.307 [Pipeline] } 00:46:01.320 [Pipeline] // dir 00:46:01.325 [Pipeline] } 00:46:01.338 [Pipeline] // wrap 00:46:01.344 [Pipeline] } 00:46:01.355 [Pipeline] // catchError 00:46:01.363 [Pipeline] stage 00:46:01.365 [Pipeline] { (Epilogue) 00:46:01.377 [Pipeline] sh 00:46:01.655 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:08.238 [Pipeline] catchError 00:46:08.240 [Pipeline] { 00:46:08.252 [Pipeline] sh 00:46:08.534 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:46:08.792 Artifacts sizes are good 00:46:08.801 [Pipeline] } 00:46:08.815 [Pipeline] // catchError 00:46:08.826 [Pipeline] archiveArtifacts 00:46:08.833 Archiving artifacts 00:46:08.954 [Pipeline] cleanWs 00:46:08.969 [WS-CLEANUP] Deleting project workspace... 00:46:08.969 [WS-CLEANUP] Deferred wipeout is used... 00:46:08.976 [WS-CLEANUP] done 00:46:08.978 [Pipeline] } 00:46:08.994 [Pipeline] // stage 00:46:09.000 [Pipeline] } 00:46:09.014 [Pipeline] // node 00:46:09.019 [Pipeline] End of Pipeline 00:46:09.054 Finished: SUCCESS
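The coverage post-processing traced in the Clean stage above condenses to the following lcov sequence (the repeated --rc branch/function-coverage flags are omitted here for readability; the paths are the job's own):

cd /home/vagrant/spdk_repo
lcov -q -a output/cov_base.info -a output/cov_test.info -o output/cov_total.info    # merge baseline + test run
lcov -q -r output/cov_total.info '*/dpdk/*' -o output/cov_total.info                # drop DPDK sources
lcov -q -r output/cov_total.info --ignore-errors unused,unused '/usr/*' -o output/cov_total.info
lcov -q -r output/cov_total.info '*/examples/vmd/*' -o output/cov_total.info        # drop example/app code
lcov -q -r output/cov_total.info '*/app/spdk_lspci/*' -o output/cov_total.info
lcov -q -r output/cov_total.info '*/app/spdk_top/*' -o output/cov_total.info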